TL;DR: Looks like a step toward the AlphaSmart of programming
If you don't remember the AlphaSmart[1], it was a line of typewriter-like devices with small LCD displays. They ran on batteries for hours and had limited save space. You could hook them up to a larger computer to save your drafts.
Where's the programming version of that? The recent HN post[2] about running a Mac 128K on an RP2040[3] got me thinking. In theory, you could get real work done on a system like that, although it wouldn't be very good for entertainment.
So, where's the AlphaSmart of low-power computing?
1. "Real" keyboard
2. Low-power display, ideally a larger e-ink one
3. Not connected to the internet
To be clear, I don't mean a TI-83-like[4], TI-92[5], or even the recent NumWorks[6]. Those are meant to be calculators and have inconvenient calculator-like form-factors.
[1]: https://en.wikipedia.org/wiki/AlphaSmart
[2]: https://news.ycombinator.com/item?id=40699684
[3]: https://axio.ms/projects/2024/06/16/MicroMac.html
[4]: https://en.wikipedia.org/wiki/TI-83_series
[5]: https://en.wikipedia.org/wiki/TI-92_series
[6]: https://en.wikipedia.org/wiki/NumWorks
i've been wanting that for years myself, and with the advent of sharp's memory-in-pixel lcd displays and ambiq's subthreshold-logic arm microcontrollers, it's become possible to make it work under a milliwatt, which means it can run purely on solar power without batteries, using parallel nand flash for mass storage at a low duty cycle. i haven't progressed beyond the earliest prototyping stage myself; my design notes on the so-called zorzpad (mostly in spanish) are in http://canonical.org/~kragen/sw/zorzpad.git/ and za3k vance has been working on a project inspired by it called the zorchpad https://blog.za3k.com/tag/zorchpad/
the primary objective of the zorzpad is longevity, with a design lifetime of 53 years. to reach that objective i think i can't use batteries, charging ports, or off-the-shelf keyswitches. this forces a lot of compromises on the system design; i haven't found a processor with an mmu that can run at under a milliwatt
("zorzpad" is a pun on "thinkpad"; it's pronounced "thorthpad" in some spanish dialects, so it's the opposite of a thinkpad)
i think e-ink displays probably use way too much power. for years i've been looking for solid power consumption numbers, but in their absence, dividing an amazon swindle's battery capacity by its battery life suggests that they use about 100 milliwatts. the zorzpad's memory lcd displays are about a tenth as big, use about a thousandth as much power (100 microwatts), and can be updated at 60 hertz (though the datasheet only guarantees 20)
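the estimate goes like this (the capacity and battery-life numbers below are assumptions for illustration, not measurements):

```python
# e-reader display power, estimated as battery capacity divided by rated
# battery life; both numbers below are assumptions for illustration
battery_wh = 3.7 * 0.89          # ~3.7 V cell, ~890 mAh -> watt-hours
reading_hours = 30               # rated reading time on one charge
avg_power_mw = battery_wh / reading_hours * 1000
print(f"~{avg_power_mw:.0f} mW average draw")  # on the order of 100 mW
```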
if you aren't worried about going batteryless, or about mass production, you could literally buy an alphasmart neo and wire its keyboard and display up to an esp32 or something
TL;DR: I love this idea and have two questions:
1. For early iterations / similar projects, could all-metal screw terminals[1] accept power internally?
2. Would supporting UXN/Varvara[2] be an option?
More on these questions below after initial comments.
## Initial Comments
I could, but it's limited in availability. The keyboard is pretty important in my opinion.
I vaguely remember hearing about this, but don't have the EE knowledge to judge the benefits of these. Data sheet for anyone interested: https://www.sharpsde.com/fileadmin/products/Displays/Specs/L...
## 1. The screw terminal:
Disclaimer: I'm not an EE specialist.
My understanding is that if the power loss isn't too great, an all-metal internal screw terminal[1] might improve device durability:
* The power source could be replaceable. AA, AAA, LiPo, solar, etc
* If you don't solder to the metal part, even sandpaper or a conveniently shaped rock could remove oxidation
* For a case, internal screw terminals could turn a charging port into an easily replaceable component
## 2. UXN/Varvara:
From your blog, I see an "artemis apollo 3 board"[3] is being used. From a Sparkfun page[4], it seems to have enough ram to host graphical Varvara.
I was initially doubtful of UXN despite loving the idea, but:
1. The UXN community seems to have built a self-hosting ecosystem[5]
2. The UXN VM is light: 64k system ram, 4-bit 2-plane color, and misc state and debug
3. The core UXN VM is simple: a minimal implementation fits in under 150 lines[6] of C89
[1]: https://en.wikipedia.org/wiki/Screw_terminal
[2]: https://wiki.xxiivv.com/site/varvara.html
[3]: https://blog.za3k.com/zorchpad-update-cardboard-mockup-mk1/
[4]: https://www.sparkfun.com/artemis
[5]: https://github.com/hundredrabbits/awesome-uxn
[6]: https://wiki.xxiivv.com/etc/uxnmin.c.txt
i'm glad to hear you find it appealing! i'll explain some of my thinking on these issues, some of which is specific to my own eccentric objectives (nobody designs computers to last decades) and some of which is more generally applicable
screw terminals or post terminals are a good idea if you're going to have charging and want longevity. but charging implies batteries. and batteries themselves induce lots of failures; they typically have a design lifetime of two years, more than an order of magnitude too low for the zorzpad. by itself that's not a fatal flaw, because you could replace them, but it becomes fatal in combination with either of the two supplementary flaws:
1. battery shelf life is limited to about 10 years, except for lithium thionyl chloride and so-called 'thermal batteries' that use a molten-salt electrolyte (kept frozen during storage). consequently, if your computer needs batteries, after 10 years you're dependent on access to not just a market but a full-fledged industrial economy to keep your computer running. a friend of mine in rural romania has been trying to build a usable lead-acid battery from scratch to provide energy storage for his solar panels, and it's motherfucking difficult without that supply chain
2. batteries have a lot of failure modes that destroy the devices they power. overheating, swelling, leaking corrosive electrolytes, and too much voltage are four of them. if these failure modes were very rare (like once in 100 battery-years) that might be okay, but they aren't
for me these are fatal flaws, but for others they may not be. if you can afford batteries, you can run for one hell of a long time on a single 18650 lipo on a milliwatt. a coin cell can deliver a milliwatt for a month
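back-of-the-envelope, with assumed typical capacities (not datasheet figures):

```python
# runtime at a 1 mW average draw; capacities are assumed typical values,
# not datasheet figures
def runtime_days(voltage_v, capacity_mah, draw_mw=1.0):
    energy_mwh = voltage_v * capacity_mah   # milliwatt-hours of stored energy
    return energy_mwh / draw_mw / 24

print(f"CR2032 coin cell: {runtime_days(3.0, 225):.0f} days")   # ~28 days
print(f"18650 li-ion:     {runtime_days(3.7, 3000):.0f} days")  # ~460 days
```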
as for uxn, uxn is the first attempt at frugal "write once, run anywhere" that's good enough to criticize. everyone should read devine's talk about its aspirations (https://100r.co/site/weathering_software_winter.html)
however, uxn is not frugal with cpu, not fast, not good for programmer productivity, and not all that simple. it's designed for interpretation, which inherently means you're throwing away 90% or 95% of even a single-threaded cpu, and it's not designed as a compilation target, which means you're stuck programming at the assembly-language level. this is pretty fucking fun, but that's because it makes easy things hard, not hard things easy; the offgrid notes lionize oulipo (https://100r.co/site/working_offgrid_efficiently.html)
as a result of the efficiency problems, the platforms with complete uxn implementations (see https://github.com/hundredrabbits/awesome-uxn#emulators) are the luxurious ones: microsoft windows, linux, amigaos, the essence operating system, anything that can run sdl or lua and love2d, gameboy advance, nintendo 64, nintendo ds, playdate, and the nook ereader: all 32-bit or 64-bit platforms with ram of 4 mebibytes or more, except that the gba only has 384 kibibytes, and there were amigas with as little as 512k (though i don’t know if they can run uxn). more limited machines like the gameboy, ms-dos, and the zx spectrum do not have complete uxn implementations
(384k is the same amount of ram as the apollo3, so it's probably feasible, as you say)
the uxn machine code is basically forth. forth is pretty aesthetically appealing to me, and i've gotten to the point where forth code is almost as easy for me to write as c code (though i have to look things up in the manual more often and have more bugs). however, it's much, much harder for me to read forth than it is to read c. it's harder for me to read forth code than to read assembly for any of 8086, arm, risc-v, i386, or amd64. possibly i just haven't written enough forth to have the enlightenment experience yet, but currently my hypothesis is that forth is just hard to read and that's it, and that the really great thing about forth is not actually the language but the interactive development experience
the varvara display model is not a good fit for the memory-in-pixel displays, which only have one bit per pixel, while varvara requires two. also, evaluating things 60 times a second is not a good fit for low-power systems of any kind. and i'm not sure how well the existing varvara applications will work at 400×240 or tolerate a requirement to use fonts many pixels tall in order to be readable
(a lot of the above notes are from xpasm.md in http://canonical.org/~kragen/sw/pavnotes2.git/ if you're interested in the context)
as for tools for software development, program launching, editing text, editing fonts, editing other graphics, and sequencing music, i'm confident i can write those myself if i have to. i've written a compiler in a subset of scheme that compiles itself, a compiler in a forth-like language that compiles itself, various toy text editors, various virtual machines in under 150 lines of code, and various pieces of music synthesis software, and i've designed a few fonts of my own (though not using software i wrote)
maybe i should set up a patreon for this or something, maybe people would be interested in supporting the project
TL;DR: Thank you for confirming some mismatch in display hardware and project goals
Agreed. As to a terminal-mode UXN, you pointed out:
I think that's the primary use case: like Java Applets and Flash before JS, these are iterations of sorta-portable, low-efficiency tools which make up for it with sheer volume of high-familiarity material.
I'm curious about this. Could you please elaborate?
When trying earlier uxntal versions, the worst part seemed to be the string syntax. If it wasn't just because it seemed easier at the time, my guess is the pain point is an intentional nudge away from making things the designer doesn't like.
This is where I get confused:
* Can you explain more about your views on this?
* Do you favor a specific structure of hardware, OS code, and user scripts, or whatever ends up offering the best use of power and long-term device durability?
Counting mW and decades implies text > GUI. Despite 100r & fans favoring GUI, there's a UXN terminal mode and a few math and terminal libraries written for it.
As to GUI, many of the ecosystem's GUI programs are FOSS with room to shrink the graphics. There's a decent 4x6 px ASCII font which is CC0 and could help, but as fair warning, it's awful at accents and umlauts.
This seems to be a universal opinion since REPLs aren't rare anymore. Being portable and easy to bootstrap are probably the main draws now. In addition to 100r's work, I think I remember hearing ${NAME}boot or another project used it to simplify porting.
Ah, this is what I suspected. I wasn't sure I understood the display you mentioned, so thank you for confirming the hardware mismatch.
Although your blog showed you'd considered two identical side-by-side displays[1], I have a radical idea which might not be a good one:
1. A "Full" display of W x H characters (the Sharp display or whatever you replace it with)
2. A "Fallback" 1 or 2 line display (2nd line reserved for editor info / UI?)
A single 1-bit editable line is so hard to work with that ed[2] isn't even installed by default on many Linux distros. However, people once got real work done with it and similar tools.
As a funny sidenote, this is where non-colorForth[3] forths may align with UXN's dev. He's a Plan9 fan and uses its Acme editor[4]. Varvara is theoretically capable of syntax highlighting, but as far as I know he refuses to implement it.
You may be pleasantly surprised by how many people will show up if you start building and releasing tools which:
* let them solve problems or have fun
* allow extending the tool
* can be used to build their own tools
Many will even contribute back. To paraphrase the Gren[5] maintainer's recent livestream[6]:
Based on your blog, I think you can safely skip to the footnotes:
* GitHub sponsors might be worthwhile if you're willing to use non-fully FOSS platforms
* Don't make people feel like they're paying for their own work
* Do let the system do real-ish creative work and self-host, even if only as emulators
If you'd like, I can provide more specific suggestions.
[1]: https://blog.za3k.com/zorchpad-update-cardboard-mockup-mk1/
[2]: https://en.wikipedia.org/wiki/Ed_(software)
[3]: https://concatenative.org/wiki/view/colorForth
[4]: https://wiki.xxiivv.com/site/acme.html
[5]: https://gren-lang.org/ (Disclaimer: I sorta contribute to this now)
[6]: https://www.youtube.com/watch?v=PO8_pV7r168 (Sorry, I don't have an exact timestamp, but I'm pretty sure it was in here)
you ask what i mean about programmer productivity. consider this python code from https://norvig.com/spell-correct.html:
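for reference, norvig's `edits1` is roughly this (reproduced from memory; check his post for the canonical version):

```python
def edits1(word):
    """every string one edit away from word: deletes, transposes,
    replaces, and inserts over a lowercase alphabet"""
    letters    = 'abcdefghijklmnopqrstuvwxyz'
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces   = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts    = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)
```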
in seven lines of code, it computes all the potentially incorrect words that can be made from a given correct word with a single edit. so, for example, for 'antidisestableshmentarianism', it returns a set of 1482 words such as 'antidisestauleshmentarianism', 'antidisestableshmentarianlism', 'antidiseitableshmentarianism', 'antidisestablesjhmentarianism', and 'antidiseptableshmentarianism', totaling 42194 bytes. how would you do this in uxntal?

here's another part of norvig's program. this part tabulates the case-smashed frequency of every word in its 6-megabyte training set (which presumably consists only of correctly spelled words):
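from memory, that tabulation is roughly the following (see norvig's post for the canonical version; a tiny inline sample stands in here for his ~6 MB big.txt):

```python
import re
from collections import Counter

def words(text):
    return re.findall(r'\w+', text.lower())

# norvig reads a ~6 MB training file; a tiny inline sample stands in here
sample = "The the THE quick brown fox and the quick dog"
WORDS = Counter(words(sample))
print(WORDS.most_common(2))  # [('the', 4), ('quick', 2)]
```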
this takes about 340 milliseconds on one core of my laptop here, which runs at about 6000 MIPS, so it would take about 34 seconds on a machine running at 60 MIPS, maybe a little longer on the apollo3. there are 32198 distinct words in the training set, totaling 244015 characters; the most common word ('the') occurs 79809 times, and the longest word ('disproportionately') is 18 characters. so plausibly you could represent this hash table without any compression in about 500k, though cpython requires about 70 megabytes. ram compression could plausibly get those 500k down to the 384k the apollo3 has without trying to swap to offchip flash

finding the best correction for a word requiring two corrections like 'slowlyyy' takes 70ms, so plausibly it would take 10 seconds or so on the apollo3. (you could maybe do this in the background in a text editor.) (if it were compiled to efficient code, it would probably be closer to 300 milliseconds on the apollo3, because cpython's interpretive overhead is a factor of about 40.) 'disproportionatelyyy' takes 370ms. here's the rest of the correction code:
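reconstructed from memory, and bundled with `edits1` so it runs standalone (a tiny inline corpus stands in for norvig's big.txt; see his post for the canonical version):

```python
import re
from collections import Counter

def words(text):
    return re.findall(r'\w+', text.lower())

# tiny stand-in corpus; norvig trains on a ~6 MB text file
WORDS = Counter(words("the quick brown fox is slowly jumping over the lazy dog"))

def P(word, N=sum(WORDS.values())):
    """relative frequency of word in the training corpus"""
    return WORDS[word] / N

def known(words):
    """the subset of words that appear in the corpus"""
    return set(w for w in words if w in WORDS)

def edits1(word):
    letters    = 'abcdefghijklmnopqrstuvwxyz'
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces   = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts    = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def edits2(word):
    """lazy sequence of everything two edits away"""
    return (e2 for e1 in edits1(word) for e2 in edits1(e1))

def candidates(word):
    return known([word]) or known(edits1(word)) or known(edits2(word)) or [word]

def correction(word):
    return max(candidates(word), key=P)

print(correction('slowlyy'))  # slowly
```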
note that this requires you to have two such `edits1` sets in memory at once, though you could plausibly avoid that problem by tolerating more duplicates (double letters provoke duplicates in deletes, transposes, and replaces)

norvig doesn't tell us exactly how long it took him to write the code, but he did it in a single airplane flight, except for some minor bugs which took years to find. more importantly, though, it's very easy code to read, so you can easily understand how it works in order to modify it. and that's the most important factor for programming productivity
here are some things in this code that are more difficult to write and much more difficult to read in uxntal:
- managing more than 64k of data (uxn's memory addresses are 16 bits)
- dynamically allocating lists of things such as the (left, right) tuples in splits
- dynamic memory allocation in general
- string concatenation
- eliminating duplicates from a set of strings
- iterating over the words in a text file
- generating a sequence of outputs from a sequence of inputs with a filtering predicate and a transformation function [f(x, y) for x, y in xys if p(x, y)]
- generating a lazy flat sequence of outputs from a nested loop (return (z for y in f(x) for z in f(y)))
- hash tables
- incrementally eliminating duplicates from a sequence of candidates that turn out to be valid words (set(w for w in words if w in WORDS))
- counting the number of occurrences of each string in a lazy sequence of strings
- floating-point arithmetic (which would be fairly easy to eliminate in this case, but not in many other cases; this deficiency in uxn is especially galling since the apollo3 has fast hardware floating point)
- finding the highest-rated item of a lazy sequence of candidates according to some scoring function
and all of that is on top of the general readability tax imposed by postfix syntax, where even figuring out which arguments are being passed to which subroutine is a mental challenge and a frequent source of bugs
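to make the argument-grouping point concrete, here's a toy postfix evaluator in python (illustrative only, not uxntal): in infix, f(a, g(b, c)) shows the grouping; in postfix, the same call flattens to a b c g f and the reader has to replay the stack mentally

```python
def rpn(tokens, ops):
    """evaluate a postfix token list; ops maps names to (arity, function)"""
    stack = []
    for tok in tokens:
        if tok in ops:
            arity, fn = ops[tok]
            args = [stack.pop() for _ in range(arity)][::-1]  # restore arg order
            stack.append(fn(*args))
        else:
            stack.append(tok)  # operands are pushed as-is
    return stack[-1]

ops = {'+': (2, lambda a, b: a + b), '*': (2, lambda a, b: a * b)}
# infix: (2 + 3) * 4    postfix: 2 3 + 4 *
print(rpn([2, 3, '+', 4, '*'], ops))  # 20
```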
note that these are mostly not deficiencies you can really patch with a library. i didn't mention that the program uses regular expressions, for example, because you can certainly implement regular expressions in uxntal. they're things you probably need to address at the level of language semantics, or virtual machine semantics in the case of the address space problem. and they're not tightly tied to cpython being implemented grossly inefficiently; pypy implements all the necessary semantics, and common lisp and c++ have similar facilities in most cases, though their handling of lazy sequences is a weakness that is particularly important on hardware with limited memory like the apollo3
so that's what i mean when i say that uxn is designed to make easy things hard, rather than making hard things easy
you say:
the thing is, i don't really care whether rek and devine think that autocorrecting misspellings is a bad thing to do; i want the computer to be a means of expression for my ideas, indeed for everyone's ideas, not for the ideas of a singular designer. that's the apple walled-garden mindset, and it's anathema to me. and, though i could be wrong about this, i think rek and devine would probably agree
TXR Lisp:
this doesn't look bad at all! considerably better than common lisp, in particular. but i think the flatter structure of the python improves readability, and the independence of the different clauses facilitates interactive incremental testing:
but lisps are generally pretty good at that kind of thing, so i imagine you could formulate it slightly differently in txr lisp to support that kind of thing (i just don't know txr lisp)

as a semantic question, is this materializing the whole list (as the python does) or are the `add` calls inserting into the hash table as the loops run, thus eliminating duplicates?
I had a bug somewhere, so I selectively turned off some of the add expressions. They can be commented out with #; or by flipping add to list or identity to throw away the value.
The add is something which pairs with build. Lisp doesn't have "bag-like" lists. For those times when they are helpful, we can have procedural list building syntax. The build macro creates an environment in which a number of operators like add build up or otherwise operate on an implicit list. When the build form terminates, it returns the list. (Its sister buildn returns the value of the last form, like progn.)
In this function, I could just have used (push expr stack) because we don't care about the order; there would be no nreverse. That would be a better idea, actually.
We could also add the strings to a table directly, like (set [h (join ...)] t).
The hash table is built by the hash-list call. It associates the elements of the list with themselves, so if "a" occurs in the list, the key "a" is associated with value "a".
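In Python terms, hash-list is roughly this (a hypothetical sketch of the idea, not TXR's actual semantics):

```python
def hash_list(items):
    """associate each element with itself, analogous to hash-list"""
    return {x: x for x in items}

h = hash_list(["a", "b", "a"])
print(h["a"])  # a
```

Duplicates collapse for free, since later keys overwrite earlier identical ones.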
thanks! it sounds like a pretty effective system, although the form of incremental development you're describing is editing and rebuilding the program, more like c than the kind of repl flow i was talking about
the use of [] for indexing rather than clojure's list building (or as conventional superparentheses) is appealing
what does buildn do with the built list?
What buildn will do with the list is simply lose it. The last form can extract it and do something with it.
When you might use it is when the goal isn't to build a list which escapes. The construct supports queuing semantics (insert at one end, take from the other), so you can use buildn to express a breadth-first traversal that doesn't return anything, or returns something other than the list:
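A Python analogue of that queue-driven traversal, under the same insert-at-one-end, take-from-the-other discipline:

```python
from collections import deque

def bfs_order(graph, start):
    """breadth-first traversal: append children at one end of the queue,
    take the next node from the other end"""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return order

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d']}
print(bfs_order(g, 'a'))  # ['a', 'b', 'c', 'd']
```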
i'll reply in more detail later but for the moment i just want to clarify that i'm not za3k, although i've been collaborating with him; his priorities for the zorchpad are a bit different from my priorities for the zorzpad
you said:
i wasn't able to parse this sentence. could you unpack it a bit?
you said:
yes, please! even if our priorities aren't exactly aligned i can surely learn a lot from you
np! I'll comment now in case it gets weird about a double-comment on the same parent by the same user.
Software that doesn't care about being well-made or efficient. It doesn't matter because it's either fun or useful.
This is a long topic, but some of it comes down to Decker[1] vs Octo[2]'s differences:
* Decker can be used to make and share things that process data
* The data itself can be exported and shared, including as gifs
* Octo can't really, but it does predate the AI community's "character cards" by shoving game data into a "cartridge" gif
* Octo has LISP Curse[3] issues
I'm serious about the LISP curse thing. In addition to awful function pointers due to ISA limitations, everyone who likes Octo tends to implement their own emulator, tooling, etc., and then eventually starts down the path of compilers.
I haven't gone that far yet, but I did implement a prototyping-oriented terminal library[4] for it. Since Gulrak's chiplet preprocessor[5] is so good, I didn't bother with writing my own.
Ty for the reminder. On that note, the larger font sizes you brought up seem more important now. I don't think I can deal with 4 x 6 fonts on tiny screens like I once could. HN's defaults are already small enough.
[1]: https://github.com/JohnEarnest/Decker
[2]: https://github.com/JohnEarnest/Octo
[3]: https://winestockwebdesign.com/Essays/Lisp_Curse.html
[4]: https://github.com/pushfoo/octo-termlib
[5]: https://github.com/gulrak/chiplet
with respect to software that doesn't need to be efficient, sure, there are lots of things that are usable without being efficient. but in rek and devine's notes on working offgrid, which are part of the background motivation for uxn/varvara, they say:
so uxn/varvara is not intended for software that doesn't need to be efficient, very much the contrary. so, from my point of view, the fact that logic-intensive programs running in uxn consume 20× more energy than they need to is basically a mistake; rek and devine made choices in its design that keep it from meeting their own goals. which isn't to say it's valueless, just that it's possible to do 20× better
what i mean by 'logic-intensive' is programs that spend most of their time running uxn code doing some kind of user-defined computation, like compilation, npc pathfinding, or numerical integration. if your program spends most of its cpu blitting sprites onto the screen, well, that's taken care of by the varvara code, and there's nothing in the varvara definition that requires that to be inefficient. but uxn itself is very hard to implement efficiently, which creates pressure to push more complexity into varvara, and varvara has ended up fairly complex and therefore more difficult than necessary to implement at all. and i think that's another failure of uxn/varvara to fulfill its own ideals. or anyway it does much worse according to its own ideals than a better design would
how much complexity does varvara impose on you? in https://news.ycombinator.com/item?id=32219386 i said the uxn/varvara implementation for the nintendo ds is 5200 lines of c, so roughly speaking it's about 20× more complexity than uxn itself
and that's what i mean by 'and it's not that simple'. in the comment i linked above, i pointed out that chifir (the first archival virtual machine good enough to criticize, which is a somewhat different purpose) took me 75 lines of c to implement, and adding graphical output to it with yeso required another 30 lines of code. you might reasonably wonder how much complexity i'm sweeping under the carpet of yeso, since surely some of those 5200 lines of code in the uxn/varvara implementation for the nintendo ds are imposed by the ds platform and not by varvara. the parts of yeso used by my chifir implementation compiled for x-windows are yeso.h, yeso-xlib.c, and yeso-pic.c, which total 518 lines of code according to either sloccount or cloc.
still, uxn, even with varvara, is much better than anything else out there; like i said, it's the first effort in this direction that's good enough to criticize
i don't understand what you mean about octo vs. decker but possibly that's because i haven't tried to use either one
you definitely can't deal with a 4×6 font on the ls027b7dh01 display i'm using. it's 35mm tall, including the pixel-less borders, and 240 pixels tall. so 6 pixels is 0.88 mm, or a 2.5 point font. even young people and nearsighted people typically need a magnifier to read a 2.5 point font.
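the arithmetic, if you want to check it (display dimensions as above; the conversion uses the standard 0.3528 mm per typographic point):

```python
# ls027b7dh01: 240 pixels over about 35 mm of display height
mm_per_px = 35 / 240
font_mm = 6 * mm_per_px        # height of a 6-pixel-tall font
font_pt = font_mm / 0.3528     # 1 typographic point = 0.3528 mm
print(f"{font_mm:.2f} mm, about {font_pt:.1f} pt")  # 0.88 mm, about 2.5 pt
```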
i just found out that my critique of chifir was apparently one of the influences on the design of uxn: http://forum.malleable.systems/t/uxn-and-varvara/107/7
with respect to winestock's account of the 'lisp curse', i think he's wrong about lisp's difficulties. he's almost right, but he's wrong. lisp's problems are somewhat social but mostly technical
basically in software there's an inherent tradeoff between flexibility (which winestock calls 'expressiveness'), comprehensibility, and performance. a glib and slightly wrong way of relating the first two is that flexibility is when you can make software do things its author didn't know it could do, and comprehensibility is when you can't. bugs are deficiencies of comprehensibility: when the thing you didn't know your code could do was the wrong thing. flexibility also usually costs performance because when your program doesn't know how, say, addition is going to be evaluated, or what function is being called, it has to check, and that takes time. (it also frustrates optimizers.) today c++ has gotten pretty far in the direction of flexibility without performance costs, but only at such vast costs to comprehensibility that large codebases routinely minimize their use of those facilities
comprehensibility is critical to collaboration, and i think that's the real technical reason lisp programs are so often islands
dynamic typing is one example of these tradeoffs. some wag commented that dynamic typing is what you should use when the type-correctness of your program is so difficult to prove that you can't convince a compiler, but also so trivial that you don't need a compiler's help. when you're writing the code, you need to reason about why it doesn't contain type errors. in c, you have to write down your reasoning as part of the program, and in lisp, you don't, but you still have to do it, or your program won't work. when someone comes along to modify your program, they need that reasoning in order to modify the program correctly, and often, in lisp, they have to reconstruct it from scratch
this is also a difficulty in python, but python is much less flexible in other ways than lisp, and this makes it much more comprehensible. despite its rebellion against curly braces, its pop infix syntax enables programmers inculcated into the ways of javascript, java, c#, c, or c++ to grasp large parts of it intuitively. in lisp, the meaning of (f g) depends on context: it can be five characters in a string, a call to the function f with the value g, an assignment of g to a new lexical variable f, a conditional that evaluates to g when f is true, or a list of the two symbols f and g. in python, all of these but the first are written with different syntaxes, so less mental processing is required to distinguish them
in traditional lisp, people tend to use lists a lot more than they should, because you can read and print them. so you end up with things like a list of a cons of two integers, a symbol, and a string, which is why we have functions like cdar and caddr. this kind of thing is not as common nowadays, because in common lisp we have defstruct and clos, and r7rs scheme finally adopted srfi-9 records (and r6rs had its own rather appalling record system), although redefining a record type in the repl is pretty dodgy. but it's still common, and it has the same problem as dynamic typing, only worse, because applying cadr to the wrong kind of list usually isn't even a runtime error, much like accessing a memory location as the wrong type in forth isn't
this kind of thing makes it significantly easier to get productive in an unfamiliar codebase in python or especially c than in lisp
40 years ago lisp was vastly more capable than the alternatives, along many axes. smalltalk was an exception, but smalltalk wasn't really available to most people. both as a programming language and as a development environment, lisp was like technology from the future, but you could use it in 01984. but the alternatives started getting less bad, borrowing lisp's best features one by one, and often adding improvements incompatible with other features of lisp
as 'lightweight languages' like python have proliferated and matured, and as alternative systems languages like c++ and golang have become more expressive, the user base of lisp has been progressively eroded to the hardest of hardcore flexibility enthusiasts, perhaps with the exception of those who gravitate to forth instead. and hardcore flexibility enthusiasts sometimes aren't the easiest people to collaborate with on a codebase, because sometimes that requires them to do things your way rather than having the flexibility to do it their own way. so that's how social factors get into it, from my point of view. i don't think the problem is that people are scratching their own itches, or that they have less incentive to collaborate because they don't need other people's help to get those itches thoroughly scratched; i think the social problem is who the remaining lisp programmers are
there are more technical problems (i find that when i rewrite python code in scheme or common lisp it's not just less readable but also significantly longer) but i don't think they're relevant to octo
you write:
i've thought about this, especially with respect to e-ink screens. some versions of the barnes & noble nook work this way: there's a small color lcd touchscreen for interactive stuff, and a large black-and-white e-paper display for bulk information display. i thought that a small reflective lcd like cheap scientific calculators use would be enough for a line or two of text, permitting rapid interaction without putting excessive demands on the e-paper's slow and power-hungry refresh
but i think that sharp's memory lcds are so much better than e-paper that there's no real benefit to that approach now
it's not that command lines are hard to work with; it's that ed's user interface is designed for 110-baud teletypes, where erasing is impossible, and typing out a 10-character error message means forcing the user to wait for an entire second, so it's preferable to just say '?\r\n'. user interfaces designed for more capable hardware, such as an adm-3a connected over a 1200-baud modem, can be immensely easier to use, without abandoning the command-line paradigm. you'll note that ex(1) is installed by default on every linux distribution except alpine (which does, oddly enough, have an ed(1)), and i know a programmer who still uses it as his preferred editor, preferring it even to vi
i've certainly done plenty of real work with tcsh and bash, where the primary user interface paradigm is editing a single line of text to your satisfaction, submitting it for execution, meditating on the results, and then repeating. not to mention the python repl, gdb's command line, octave, units(1), r, pari/gp, sqlite, psql, swi-prolog, guile, ngspice, etc. i like direct-manipulation interfaces a lot, but they're harder to design, and especially hard to design for complex problem-solving activities. it's just that correcting a typo a few words ago isn't really a complex problem-solving activity, so, by my lights, even a relatively dumb direct-manipulation text editor can beat a command-line editor like ex most of the time. especially if their bandwidth to your eyes is measured in megabits per second rather than kilobits per second
On my system, it seems to be set up as an alias of vim ("Entering Ex mode"). Huh. A bit more reading after that turned into an interesting historical detour.
I'd never tried ed before this comment. Others might see it as strange and clearly teletype-oriented, but I think it's refreshingly explicit about every action taken. Even more than ex, it's a vim ancestor where your commands and the editor's responses stay persistently visible instead of being hidden.
Thank you for mentioning this, but now I'm a little worried I might start wanting to use ed too.
I think you've hit on something important here. Web-oriented workflows with live reload (and Flutter[1]) inherit this. I think it means the form of the REPL itself isn't what we should be focusing on, but easy iteration and maybe cached results to save power and time.
On the topic of accessibility, do you have any opinions on Hedy[2] and whether a long-lived, low-power system might have room for something like it? The site is annoyingly JS-filled, but the idea is interesting: a language where the names in the symbol table are more overtly a user-localizable value.
[1]: https://flutter.dev/
[2]: https://www.hedycode.com/
you ask:
well, i think it's critical for the zorzpad to be able to rebuild its own operating system code, because otherwise it's dependent on external computers to modify its own operating system, and the external computers might have become incompatible or untrustworthy at some point. this poses tricky problems of recovery from bad operating-system updates; probably having multiple redundant microcontrollers in the device is the best option, so that on bricking one of them, i can restore its old configuration using the other
ideally, though, the core code capable of bricking it would be a small tcb, not everything that runs on the system the way it is in ms-dos or templeos. that kind of isolation is usually implemented with an mmu, but the apollo3 doesn't have an mmu, just an mpu. so currently my plan is to implement it with a low-level virtual machine instead; initially an interpreter (my prototype seems to have about 10× interpretive overhead, but probably providing a small number of extra primitives will be adequate to eliminate the worst hotspots from things like updating the display and digital signal processing) but maybe later a simple jit compiler. like most microcontrollers, the apollo3's memory/throughput quotient comes to about a millisecond, rather than the second or so that has been typical for human-interactive computation for the last 60 years, so i suspect i'll be leaning pretty hard to the side of sacrificing speed to reduce memory usage
in addition to providing the protection needed to write experimental code without risk of bricking the machine, the virtual machine can also provide services like virtual memory (including compressed ram), transactional concurrency, and transparent persistence
the nand flash chips i'm using (s34ms01g2) are supposed to consume 27 milliwatts when running, but their power budget at the nominal 1-milliwatt full power is only 300 microwatts. at full speed they're capable of 133 megabytes per second, so at 300 microwatts they might be capable of 1.5 megabytes per second. i can't write to them nearly that fast without wearing them out early, but i can load data from them that fast. in particular that means that the 24 kilobytes of a full screen can be loaded from flash every 16 milliseconds, so you can do 60fps video from flash. reloading all 384k of ram, for example to switch to a completely different workspace, would take a quarter of a second. i feel like this is enough of a quantitative difference from the speed of floppy disks on a commodore 64 or a macintosh 128 that it becomes a qualitative difference, and for applications like the spell corrector example i gave in the other comment, it might be possible to use the flash almost as if it were ram, using the ram as a cache of what's in the flash
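the duty-cycle arithmetic above, as a quick python sanity check (the assumption that usable throughput scales linearly with the power budget is mine, not a datasheet figure):

```python
# duty-cycle arithmetic for the s34ms01g2 nand flash, assuming usable
# throughput scales linearly with the power budget (an approximation,
# not a datasheet guarantee)
full_power_w = 0.027        # ~27 mW when running flat out
budget_w = 300e-6           # 300 microwatt share of the 1 mW budget
full_speed_bps = 133e6      # 133 megabytes per second at full power
effective_bps = full_speed_bps * budget_w / full_power_w  # ~1.5 MB/s
frame_s = 24 * 1024 / effective_bps   # ~16 ms per 24 kB screen refresh
swap_s = 384 * 1024 / effective_bps   # ~0.27 s to reload all 384k of ram
```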
but maybe when i gain more experience with the hardware, all that stuff will turn out to have been the wrong approach, and i'll have to go with something else
that would be great! on the other hand, i don't want to count on it; i want it to be a pleasant surprise :)
so, i don't want to be unkind, but i think this is a misconception, or a pair of misconceptions, and i think i know where one of them comes from. i'll try to steelman it before explaining why it's wrong
the first assertion is that counting milliwatts (or actually microwatts in this case) implies that text beats guis. the second assertion is that counting decades implies that text beats guis. i have no idea where the second assertion comes from, so i'll focus on the first
a first plausible justification for the first assertion is that power usage (over leakage) is proportional to the number of toggling bits (or perhaps cubic or quadratic in the number of toggling bits if voltage scaling is an option) and a character generator toggles many fewer bits than all the computation needed to draw a gui, so applications written for a character-cell text terminal can use much less power than gui applications. a second plausible justification is that we have many examples of applications written for character-cell text terminals which use much less computation (and therefore power) than any existing gui alternative
the first justification has two problems: first, it's comparing solutions to the wrong problem, and second, character-cell text terminals aren't even the best solution to that problem; they're the best solution to a significantly different problem which we don't have
the problem that the first justification is trying to solve is the problem of generating a video signal that is useful according to the goals of the application. in that context, the character generator is running all the time as the beam refreshes the phosphors on the crt over and over again. it is true that the character generator can compute the pixels to send to the crt much faster and with fewer transistors than a cpu can. but in the zorzpad, the display itself is bistable, like the plato gas plasma terminals from the 01960s. there's a flip-flop printed on the glass in amorphous silicon next to each pixel. so we don't have to consume any computational power refreshing a static display (though we do have to reverse the polarity to it a few times a second to prevent gradual damage to the lcd). we only have to send it updated lines of 400 bits. we don't have to generate a continuous video signal
moreover, the fact that a character generator would toggle fewer bits than the cpu to generate a line of pixels is somewhat irrelevant, because ambiq doesn't sell a character generator chip, and a character generator implemented in conventional cmos from another vendor (or on an fpga) would use orders of magnitude more power than the subthreshold-logic ambiq microcontroller, so we'd have to do the character generation process on the cpu anyway. at that point, and especially considering that the memory-in-pixel lcd display hardware doesn't impose any timing constraints on us the way an ntsc or vga signal does, there's no extra cost to generating those lines in a more flexible way. even without a framebuffer, generating lines of pixels on demand, we could support glyphs positioned with pixel precision rather than character-cell precision, proportional fonts, multiple sizes, polygons, rectangles, stipple patterns, overlaid sprites, and so on
also, though, generating a line of pixels or a vga video signal from a framebuffer requires less bit toggling than generating it from a character generator. with the framebuffer you literally just read sequential words from ram and send them to the display with no processing. what character generators save isn't power; it's memory. with a character generator you can drive a 1280×1024 display with an 8×16 font (256 characters, say, so 4096 bytes of font memory) displaying 64 lines of 160 columns, requiring 10240 bytes of glyph indices, for a total of 14336 bytes of memory. for the traditional 80×25 you only need 2k of modifiable memory and a font rom. and you can do it with strictly sequential memory access (many early character-cell text terminals used delay-line memory or shift-register memory) and meeting the strict hard-real-time guarantees required by the crt hardware being driven
in this case, 400×240 1-bit pixels would require 12000 bytes of framebuffer per screen, and 24k is about 6% of the apollo3's total 384k of sram. in theory the apollo3 needs about 2.1 cycles per 32-bit word to copy data into the framebuffer without rotating it, a number i haven't tested, so 12600 cycles to redraw both screens from scratch; at 48 megahertz that's about 260 microseconds, so it doesn't use much of the chip's available horsepower even at submilliwatt power levels. the display datasheet only guarantees being able to draw two megapixels per second, so redrawing the screen should take 48000 microseconds, although others have reported that it works reliably at three times that speed. hopefully i can use the cpu's built-in spi peripheral to spew out buffered spi data to the screen while the cpu goes back into deep-sleep mode. if i use a character generator implemented in software to avoid having a framebuffer, i have to wake up the cpu periodically to generate new lines of data, using more power, not less. (which might be worth it for the memory savings, we'll see.)
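the framebuffer and redraw numbers above, spelled out (the 2.1 cycles per word is, as i said, a number i haven't tested):

```python
# memory and redraw-time estimates for two 400x240 1-bit memory lcds
# driven by an apollo3 at 48 MHz.  the 2.1 cycles per 32-bit word for
# copying into the framebuffer is the untested figure mentioned above.
width, height, screens = 400, 240, 2
fb_bytes = width * height // 8           # 12000 bytes per screen
total_fb = fb_bytes * screens            # 24000 bytes for both screens
fb_fraction = total_fb / (384 * 1024)    # ~6% of the 384k of sram
cycles = 2.1 * (total_fb // 4)           # ~12600 cycles to redraw both
cpu_redraw_us = cycles / 48e6 * 1e6      # ~260 microseconds of cpu time
spi_redraw_us = (width * height) / 2e6 * 1e6  # 48000 us per screen at
                                              # the guaranteed 2 Mpx/s
```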
even without enough memory for a framebuffer, though, character generators weren't the only way to generate hard-real-time video signals. the nintendo (nes) only had 2k+256 bytes of vram, for example, and you could do pretty elaborate graphics with that. it used tiles (which are sort of similar to font glyphs and can in fact be used for font glyphs) and sprites (which aren't), plus scrolling registers. some other hardware supported display lists of straight lines instead of or in addition to glyph indices
the second justification is basically a cohort confounder. yes, vs code uses a lot more cpu than vim. but vim was originally written for the amiga, and vs code is written in electron. applications that run in character-cell text terminals do empirically tend to be a lot faster than gui applications, but that's largely because they're older, and so they had to be written with more attention to efficiency. remember that geos provided a standard wimp interface on the commodore 64, with mouse pointers, sliders, pulldown menus, proportional fonts, overlapping windows, and so on, and all that in color without a color framebuffer, adding extra work. it was slow, especially when it had to load data from the floppy drive, but it was usable. the commodore 64 is about 0.02 dhrystone mips (https://netlib.org/performance/html/dhrystone.data.col0.html) and the apollo3 is about 60, so running something like geos on it at 10× the commodore speed ought to use about .3% of its full-speed cpu. or 3% if it's running in a simple bytecode interpreter
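the geos scaling estimate above, spelled out:

```python
# dhrystone-mips scaling for the geos comparison: commodore 64 vs
# apollo3, and the cpu fraction needed to run at 10x c64 speed,
# natively and under a bytecode interpreter with ~10x overhead
c64_mips = 0.02
apollo3_mips = 60.0
ratio = apollo3_mips / c64_mips           # ~3000x
native_fraction = 10.0 / ratio            # ~0.3% of the full-speed cpu
interp_fraction = 10.0 * native_fraction  # ~3% under the interpreter
```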
so that's why i think submilliwatt computing doesn't require abjuring the gui
does that make sense? have i just totally misunderstood you?
Awesome art project. Where did you get 53 years from? I think, for longevity as a design criterion, including the documentation, design files and all, is the single most significant strategy.
Mechanical: machine everything out of solid metal, over-spec the springs, seals and potential ingress point precision.
Electronics: fat traces, replacement-optimized through hole components, socket-mounted chips, over-rated everything, passive cooling, conformal coating, arrays of smaller components in preference to single large components.
Fasteners: easy-grip, broad-head, twistable screws only, potentially doubling as stands; this ensures accessibility until fingers become obsolete.
Software: ROM base system with physical restore function, dual ROM with mutual checksum if required, RAM for everything else.
Digital storage/retrieval: Solid state only.
Physical output: Optional low-longevity RS232 receipt printer for retro effect.
Power: Solar + supercaps and/or a corded pull-out analog watch-style shake-to-charge assembly and/or a thermal charger utilizing thermals from combustion and/or a pneumatic charger utilizing common compressed air interfaces as a power source.
thanks! your ideas are mostly pretty good
i agree that the machine should be fully self-describing, like darius bacon's 'book in itself'. of course, that won't help you if it won't turn on...
rather than just conformal coating and seals, i'm thinking i'll not just conformally coat the boards, but also pot the whole assembly in something like silicone or toilet-ring wax, so that the potting material can be removed, but otherwise it's trivially ip68. except for the keyswitches, but the keyswitches can be a magnetic type that doesn't have any exposed electrical contacts or any springs
solar-powered devices generally don't have to worry too much about cooling, though. or if they do it's from the heat they passively absorb from the sun, not from anything they dissipate internally. if your solar panels are 10% efficient (the best you can get out of amorphous silicon, which is the only kind that works under indoor lighting) and cover 30% of your device, they only harness 3% of the illuminance, which is later converted to heat in the circuitry. the other 97% is either reflected or converted into heat immediately. so the heat produced internally is only about 5% of the heat it has to deal with. now think about how much heat you feel on your skin from indoor illumination; we're talking about 5% of that
socket-mounted chips tend to induce a lot of unreliability, so i don't favor them. desoldering surface-mount chips is a bit of a pain but still done routinely in cellphone repair shops around the world. but i want it to make it to its design lifetime without replacing components, and one reason for that is that i don't want to depend on the economy. (consider what components were commonplace in 01971 and how many of them are hard to obtain nowadays — and that was 53 years of relative peacetime. imagine trying to get parts today in moscow.)
unlike batteries, chips do have a long shelf life, so you could conceivably stockpile chips today for future repairs. but if you're going to do that, probably the best place to stockpile them is on circuit boards inside the device, so you don't have to solder anything to fail over to them. that's not applicable for every device, but fortunately things like voltage regulators have very low failure rates, and you can still probably build in redundancy at a higher level. that's what we did when i worked on satellite systems: of n identical systems onboard, we only needed one to survive to run the satellite. there were even multiple power buses, so that even a failure that shorted a voltage rail to ground would only disable part of the satellite
as i understand it (not having taken the measurements yet myself) over-rating is especially important for bypass capacitors in this context — not because they're prone to failure (unless you're using tantalums) but because capacitors have enormously higher leakage near their rated voltage
if you want to build such a thing and have a mechanical charging option, i think a pullstring like 20th-century children's windup toys is the best option. it's compact, the spring constrains how much force you can apply to the generator and runs it at a predictable speed, and the pullstring grommet constrains the direction of force applied to the mechanism so that kofi annan can't break it by applying side-loading to your crank handle like he did with olpc. but i don't think electromechanical generators are likely to fit into the reliability budget
for screws, i think protruding screws would catch on my pockets. but there are lots of ways to fasten things together in reopenable ways, especially if you don't need seals. if you did want to use conventional milled-head screws, well, i filed a flathead screwdriver out of a bolt a few months ago. i used a file, a vise, and a steel bolt. but exposed screws would tend to rust unless you made them out of titanium or something, which isn't an option for me
a metal case has advantages and disadvantages. being able to take input power inductively (qi style) or communicate over rf could be useful, which you can't do with an all-metal case. metals are not very chemically stable, except for gold, silver, platinum, iridium, palladium, lead, nickel, aluminum, titanium, tin, and chromium. most of these have major disadvantages of their own, though titanium would be a pretty good option. many plastics are very chemically stable, including polypropylene, polyethylene terephthalate, silicones, and epoxy resins, and the epoxies in particular can be glass-fiber reinforced up to steel-like strengths and stiffnesses. so that was my plan
why 53? i just like 53
Pocketable wasn't in my mental spec, agree this is a good idea. You can also inset them.
Selling a portable as Kofi-proof is a neat marketing angle ;)
Depending on your radio frequency perhaps you could put your RF behind your screen to get signal out, or use a slot as the antenna, or have a fold-out or telescopic antenna, or have a screw-on antenna port exposed when you open the thing, or some combination thereof.
Aluminium's fine for cases, cheap, light, castable, extrudable, cheaply machined, and readily anodized.
Steel won't rust for ages if it's of a decent chemistry and either treated and/or oiled when the thread is mated. Even steel that appears heavily rusted after many decades of total abuse can in many cases be restored quite easily to a good state. People have maintained multi-hundred-year-old wooden structures; I don't think an embedded steel thread with an oil surface coating is going to disappear overnight. As a learning project I recently restored a 1980s drill press (about 40 years old, which is most of your target length) that had apparently been exposed to moisture for 20+ years, and it was mostly intact. The net result of the learning was that it's not worth restoring old drill presses, but I sure learned a lot!
For maximum chip shelf life, you want to store them in nitrogen or some similar inert gas.
With Wikipedia and survivalist content modules you could market this to the prepper crowd. Of course long distance radio would be an add-on. As would the radio direction finding hardware, encryption, etc.
I still like the pneumatic idea. 100% of existing bicycle pump hardware becomes your viable charging interface. Speaking of which, a low profile wheel based generator would be a good add-on module.
so, with respect to steel, i think a steel case in a sweaty pocket will rust rather quickly and comprehensively, even if it's a good steel. often alloying elements that improve steel's mechanical properties make it more prone to rust rather than less so
i'm guessing the drill press was not in working condition after the 20 years of moisture, and i'm assuming you mean something like 'in a basement with relative humidity that reached 100%' or 'with the bottom resting on moist soil' and not 'under dripping water for 20 years' or 'underwater for 20 years'
i don't really have a good way to machine steel anyway, and it's inconveniently heavy. aluminum is amenable to wood rasps and drills
i think probably embedded in silicone or paraffin wax or something would be better than just stored in nitrogen. gotta be careful of static buildup tho
while i sympathize with preppers in some ways (i, too, value autonomy) i think they might be a hard group to market to. a friend of mine occasionally reads survivalist forums and there's a lot of toxic ultra-right-wing stuff there; i don't want to be lynched for not supporting trump
you can of course do secure encryption on any general-purpose computer unless you're trying to defend against side-channel attacks. i ran pgp on a 286, and we have more efficient algorithms now
long-distance radio is potentially interesting but i agree that it's probably a different piece of hardware; short wavelengths can only go short distances unless they're bouncing off the moon or something, and you need a long antenna to transmit long wavelengths efficiently, although efficiency isn't a significant concern for reception (at transcontinental-capable wavelengths radio noise is a bigger concern than noise generated inside your amplifier). so for long-distance radio you probably need a large, fixed antenna installation rather than a pocket computer
my concern with pneumatic-to-electric power is that it's probably hard to make reliable. but i don't really know
For the radio, ask a HAM. I'm sure they can workshop something out of nothing and turn a clothes line into a global command center... anyway yes, the size of long-distance antennae will prohibit having them internal. One could include an SDR module, but why not make it an add-on?
On pneumatics, https://en.wikipedia.org/wiki/Compressed-air_energy_storage suggests 50-70% efficiency at utility scale, with China leading. https://www.csiro.au/en/research/technology-space/energy/Ene... suggests 65% with 75% assumed aspirationally feasible. https://www.resilience.org/stories/2018-05-18/ditch-the-batt... suggests 75-85% but is likely non-holistic. https://www.signal.it/en/products/case-history-pneumopower/ suggests a small-scale implementation has been successfully productized. Indicatively at <=7 bar pressure they achieved post-loss output of DC24V@12W with a ~300g device. Since only a fraction of this is required, it should be possible to drop to ~50g with some loss of efficiency. Tiny motors are inexpensive as numerous factories compete for business, but they can fail so a replaceable setup external to the body would be preferable.
Since you only need a very small amount of current, something like a small compressed air canister could then presumably store many months worth of power. It would need to activate the air source for recharging only sporadically. This could be done using physical input from the operator and an NC-type valve in order to sidestep the flat-start problem, with a parallel option for electronic actuation.
Perhaps steampunk social networking would be a better market. One imagines darkly clad figures creeping around urban environments, clandestinely sneaking power top-ups from random vehicle tyres...
i don't think it matters whether the efficiency is 100% or 1%; as it happens, i just pumped up my bike tire from dead flat earlier tonight, and i did on the order of 2 kilojoules of work with a hand pump to do that. 1% of that would be 20 joules, which would be 20000 seconds of full-speed 1-milliwatt zorzpad usage
and conceivably the pneumatic storage option could help to solve the problem of 'where do you store those joules until you're ready to use them?' this is a problem at the electrical level because batteries and electrolytic capacitors are not long-lived, and non-electrolytic capacitors max out at about 1 microfarad for ferroelectric mlccs. 10 volts at a microfarad is 50 microjoules, so storing an entire joule in long-lived capacitors would require about 20000 capacitors
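spelling out that arithmetic (E = ½CV² for the capacitors):

```python
# 1% of ~2 kJ of hand-pumping, delivered at the zorzpad's 1 mW budget,
# and the energy stored in a long-lived 1 uF ferroelectric mlcc at 10 V
pump_j = 2000.0 * 0.01            # 20 joules recovered at 1% efficiency
runtime_s = pump_j / 0.001        # 20000 seconds at 1 milliwatt
e_cap_j = 0.5 * 1e-6 * 10.0**2    # E = 1/2 C V^2 = 50 microjoules
caps_per_joule = 1.0 / e_cap_j    # ~20000 capacitors to store one joule
```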
but you still have the problem of how to convert the pneumatically stored energy into electrical energy at milliwatt rates (plus or minus a few microjoules, anyway) with decades of mtbf
if you're willing to require manual labor to pump up the palmtop with energy, though, a pullstring can store the energy in a steel spring, which can easily hold several joules and parcel them out at milliwatt rates to an electromechanical generator, with efficiencies in the neighborhood of 90% if you care about that. maybe you could even make it reliable, but it isn't obvious how. (grossly overspecced gears on bronze oilite bushings, maybe, driving a grossly overspecced generator?) a single heavy pull on a bowstring routinely stores around 100 joules, so i think a pullstring could be competitive
a manually powered computer like this could harness more energy at night than the purely-solar-powered zorzpad can. in the daytime, though, the sun provides more energy than you can comfortably provide by hand. consider a 150×90 pocket-sized notebook; in full sunlight it's collecting 20 watts, the equivalent of that heavy bowstring pull every 5 seconds. if only 30% of the notebook is covered in solar panels and they're only 10% efficient, you're still collecting 600 milliwatts, the equivalent of that heavy bowstring pull every 2½ minutes
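the sunlight comparison in numbers, using the figures from this comment (~20 watts of full sun on the notebook, ~100 joules per heavy pull):

```python
# sunlight vs. hand power for a 150x90 mm pocket notebook: ~20 W of
# full sunlight incident on it, 30% of its surface covered in
# 10%-efficient amorphous-silicon panels, ~100 J per bowstring pull
incident_w = 20.0
harvest_w = incident_w * 0.30 * 0.10        # 600 mW from the panels
pull_j = 100.0
s_per_pull_full_sun = pull_j / incident_w   # one pull's worth every 5 s
s_per_pull_panels = pull_j / harvest_w      # ~167 s, about 2.5 minutes
```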
oh, i don't know why i didn't answer the stuff about steel and preppers and pumps; maybe it was a brain fart or maybe you edited it in later. i'll try to answer later
Yeah, it was a few edits. I often stay on a theme awhile once consciously captured, resulting in a re-read and new ideas/tweaks. Ninja skill: trainable psycho-inertia, AKA "focus", now perhaps ostracized as a 'spectrum member' activity; I believe it was merely associated with clarity of thinking and general education in the 19th and early 20th centuries.
those are good ideas about antennas
i hadn't actually considered machining it out of an aluminum billet. you might be right that it would be adequately stiff, even if it's much less stiff than glass-fiber-reinforced epoxy. it's much heavier than epoxy, but only slightly heavier than glass, and of course most of its grades are enormously less brittle
I wonder, what's the point of running batteryless? With such a small power budget, the device could accumulate energy when it can harvest more than it needs to consume, and use it when the lighting conditions deteriorate. It would only take a really small battery, but it would seriously increase usefulness.
i've discussed the reliability problems of batteries further in https://news.ycombinator.com/item?id=40805573, but, as an intuition pump, consider that when cellphone repair shops advertise particular repairs, the particular repairs they advertise most are battery replacement, charging port replacement, and broken screen replacement
So use a battery of a standard form factor and make it easy to replace. E.g. one AAA battery per device is easiest. Most coin-sized lithium batteries are also rechargeable, explicitly or implicitly.
What do you mean by implicitly? Last time I checked, most button/coin cells (CR2032 and the like) aren't known to be rechargeable, unless you specifically get rechargeable ones.
you don't seem to have really understood my points, because you aren't engaging with them; as it happens, you're also mistaken about coin-sized lithium batteries (though any battery can be slightly recharged)
My understanding is that eink only uses power on refresh; if you don’t change the image, the pigments remain where they are and draw no power.
yes, that's right. so there's a refresh-rate crossover point at which e-ink actually uses less power. my calculation from the very uncertain information i have is that it's around 20 minutes. that is, if you update the display once every five minutes, the e-ink display will use significantly more power than the memory-in-pixel lcd display updating 60 times a second. there are real limits to the utility of a computer that needs several minutes to redraw its display; though i wouldn't venture to say that it's useless, you can't do anything similar to a conventional gui on it
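as a toy model of that crossover (these specific numbers are made up to illustrate the shape of the tradeoff, not measurements):

```python
# toy crossover model with hypothetical numbers: suppose a full e-ink
# refresh costs e_refresh joules and the memory-in-pixel lcd draws
# p_lcd watts continuously, including its 60 fps updates.  e-ink only
# wins when refreshes are rarer than t_crossover.
e_refresh = 0.12    # hypothetical joules per full e-ink refresh
p_lcd = 100e-6      # hypothetical 100 microwatt continuous lcd draw
t_crossover = e_refresh / p_lcd   # seconds between refreshes at parity
minutes = t_crossover / 60        # ~20 minutes with these numbers
```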
The hisense e-ink phones are known to have week long battery lives (A5 and A9) so I'm thinking not all e-ink is the same. I know there's e-ink nerds out there and forums dedicated to it but I don't actually know the different types (here's an overview I think: https://en.wikipedia.org/wiki/Electronic_paper). Maybe the good stuff is harder to find
the amazon swindle is known for a month-long battery life, so a week is easily believable. but you're talking about how to solve a different problem
if batteries are an option, e-ink is a super-low-power option because it uses 100 milliwatts or maybe 10 milliwatts at a cellphone size, while a conventional backlit color lcd uses 1000. but to run off solar panels, indoors, my power budget for the whole zorzpad is 1 milliwatt, and the screen can only have a fraction of that
Have you given much thought to the idea that parts which are inexpensive and shelf-stable could just be bulked up? A long lifetime won't save you from parts that simply get damaged by wear and tear.
For something like a keyboard, it might be better to have a box of spares, rather than trying to build for 53 years of use and abuse... just as an example.
yeah, it might be a reasonable choice for mechanical parts like keyswitches and hinges. i think keyswitches with a wearout life of a billion cycles aren't infeasible either, but testing them to verify the cycle life could be challenging
TRS-80 Model 100 seriously fits the bill. Basic, word processor and spreadsheet in ROM, pixel addressable display, fabulous keyboard for its size. RS-232 and 300bps modem. The recent Clockwork-PI evokes it - but the keyboard is a toy and then it’s just a Linux laptop with all the distractions therein. The Model 100 was carried by journalists well into the 90s, to compose and upload stories to service bureaus.
I had one and in a moment of greed sold it :-( What I loved about it was:
* Used regular batteries, no charging, easily obtainable, lasts quite a bit
* Keyboard was awesome
* Retro look & feel
* Size
What I didn't like:
* 8 line display is tiny!
* Would prefer a bit smaller
* Getting data off is a pain (there are many workarounds, but you have to fiddle with it)
So, assuming a $200 price point and using the 5x rule of thumb for HW products, the BOM should be $40. Problem is, eInk displays are kind of expensive. Putting $15-ish toward a 7-inch+ display, the rest seems doable. (I've never developed a consumer HW product, so these are wild guesses.)
What's the point of e-ink? It's good when the picture changes once every few hours, but it sucks for word processing.
A monochrome transflective LCD (without a backlight) would consume fractions of a milliwatt, can be a high-resolution graphical display or a character-based display, and would look plenty sharp under direct sunlight. It's also widely available and inexpensive.
It can't have such a high contrast as e-ink though.
Yeah, best compromise is a single row LCD display for the "live", "editing" line of text and then eInk for the rows of text above. Of course scrolling the eInk is still slow.
the memory-in-pixel lcd is better than that approach in every way except that it's more expensive, there's no grayscale, and the display panels don't come in large sizes
typically monochrome transflective lcds consume quite a bit more than fractions of a milliwatt, though it does depend on how big they are. the two-line reflective lcds commonly found on pocket calculators do indeed consume a fraction of a milliwatt, but power consumption scales mostly with display size
e-ink is fast enough to be usable for word processing because the updates are small and incremental and, most importantly, can tolerate some ghosting
TL;DR: The TRS-80 Model 100[1] indeed looks closer than the AlphaSmart!
If something like that TRS had e-ink, it might be what I had in mind. If you're familiar with the OLPC hardware[2], imagine that except:
* More durable
* Lower power
* E-ink
* Not effectively vaporware
On a related note, I heard Pixel Qi[3] displays were interesting. However, the company folded. Some panels seem to pop up for sale now and then, but my understanding is they're effectively obsolete.
That's a shame.
After taking a look at the device, this seems like an understatement. The Clockwork-PI[4] looks like it's mostly an emulation machine. For anyone who doesn't want to click through, it's a Game Boy form factor with a few extra buttons and a keyboard below the D-pad.
To be fair, I think it's a different class of device. It also fills some very specific niches:
1. Prototyping for GB/GBC games on higher-power hardware and higher-level languages
2. A decent front-end for attached sensors if you can use USB
As an example of #2: mapping WiFi signal strength. Walking around a building holding a clunky laptop while someone else held the antenna wasn't fun. It was what we had to work with given the deadline, but if I had to do it often, I might want something like the Clockwork PI, if someone could source better keyboards.
[1]: https://en.wikipedia.org/wiki/TRS-80_Model_100
[2]: https://en.wikipedia.org/wiki/OLPC_XO
[3]: https://en.wikipedia.org/wiki/Pixel_Qi
[4]: https://www.clockworkpi.com/
Just to be clear - I was referring to what they call the "Devterm"[1] with the wide screen and toy keyboard. They even sell it in a color scheme that matches the Model 100 - and it has a built-in printer like the Epson (? Something-20) of that era.
[1] https://www.clockworkpi.com/devterm
I always loved the simplicity and form factor of the Model 100. For a "practical" modern version, the tricky bit is getting data on and off. Wifi/ethernet would definitely replace the modem, but for comms, ssh/scp isn't that convenient for folks looking for something "simple". Personally, some combination of Dropbox-esque file sharing and SMB disk mounting strikes me as practical.
I doubt that there's anything like a commercial product here, but I find it a really fascinating design space.
I would plunk down $199 right now for a basic version of this (more with extra features, e.g. WiFi, an expansion slot, etc.). My key use case would be using it on planes, especially when the jerk in front tilts their seat all the way back.
Take my money. I want a little thing like that to carry in my bag for writing when the inspiration strikes. I’d be perfectly ok with it being append-only as long as it could easily dump data back to my laptop for real editing later on.
this is one of my key use cases for the zorzpad, but i want to be able to program it, because often what i'm writing are algorithms
I’m torn. One hand: Yeah! Other: I don’t need more convenient distractions at hand. Programmability is an attractive nuisance for me.
yeah, i have that problem too :(
WiFi? What? How would you even use that?
Even just an e-ink AlphaSmart would be killer. The key feature for me was the Mac keyboard + shortcuts, so I never had to think while using it.
TL;DR: Freewrite[1] tries but it isn't the same
They are selling a line of professionally oriented devices. I'm not sure they're worth it.
* The "Alpha" model[2] nods to the AlphaSmart form factor, but it isn't e-ink
* Weird-looking keyboard layouts with no Apple-style key in sight
* These devices are expensive for what they seem to offer
I've never used one so I can't really speak about more than what I've seen on the site. There were some negative reviews of earlier models. I'm not sure how current ones rate, nor what their keyboard feels like.
[1]: https://getfreewrite.com/
[2]: https://getfreewrite.com/products/alpha
An e-ink with an HDMI input would be fantastic.
Dasung makes e-ink monitors with HDMI input, such as the Paperlike.
I'd say the Cardputer is closest to what you're asking for.
https://docs.m5stack.com/en/core/Cardputer
I use an Alphasmart Neo for journaling. Recently modded it with an ESP32 for wireless file transfer. I love it!
https://www.dannysalzman.com/2024/06/20/modding-alphasmart-n...