Rob's Blog


December 31, 2018

I've been learning way more about the FAT filesystem format than I necessarily wanted to know. I've dug into the wikipedia articles on it, Linux's fs/fat code, Microsoft's 18 year old Word document on it (on the theory that any patents in it should have expired by now), and of course the newfs_msdos source that Android's currently using for this. (I also looked at the digital camera spec and EFI bios spec that _claim_ to have FAT as part of the standard, but neither actually talks about binary on-disk layout. It's just hand-waving "everybody knows" nonsense.)

What I want to do is write a mkfs.vfat and fsck.vfat, a genvfatfs archiver, and maybe some mtools-style manipulation utilities. (Not sure which: mtype, mcopy, mdel, mdir, mmkdir, mrmdir, and mmove I guess? I don't _really_ want to do mdeltree and mxcopy, is adding a -r flag to mcopy and mdel cheating? Or should I do mcp/mrm/mcat/mmv with Linux syntax? Hmmm...)

December 28, 2018

I've been poking at mkfs.vfat. It's a 512 byte boot sector, two instances of a large bitfield, and a tree of linked lists. There are three format variants: fat12, fat16, and fat32.

The 12 bit version is obsolete: it was used by floppy disks, but maxes out at 4084 clusters (the first two FAT entries are reserved, and values from 0xFF0 up are reserved for things like the end of file marker), for a capacity a little under 2 megabytes. (You can use clusters bigger than the original 512 byte sector size to stretch it out to 256 megs with a 64k cluster size, but there's not much point.)

DOS 3.0 introduced FAT16, which can do 65524 clusters (if they're 64k each that's 4 gigs), and then there's fat32 (which is icky, but not _that_ icky).

The old CP/M BIOS (which DOS copied) used to load the first sector from the disk/partition and jump to the start of it, so for historical reasons the FAT boot sector starts with a 3 byte x86 jump instruction, the 3 bytes being 0xEB, 0x3C, 0x90 (a short jump over the rest of the header, padded with a NOP). On Android, with an ARM processor, this is useless, but might be used for partition identification.

FAT is a legacy filesystem format that does no flash wear leveling whatsoever: certain sectors are rewritten with _every_ write to the disk. USB flash sticks detect when they've got a FAT filesystem on them and do their own wear leveling to mitigate this, but which attributes they use to determine "yup, this is a FAT filesystem" is ad-hoc and variable. So it's best to keep things like those initial 3 bytes constant whether or not they're "needed", just so we burn out fewer USB sticks.

After the 3 magic bytes, there's an 8 byte OEM string that can be anything (I'm sticking "toybox" in there, plus two NUL bytes). Next up is the "bios parameter block", which starts with a 2 byte little endian "bytes per sector" value, which is probably always 512. (Note: since the 3 byte jump instruction was an odd number of bytes and the OEM string was 8 bytes, this unsigned short starts at an odd address! I.E. it's unaligned, so we can't just typecast and read it. Luckily, I have peek() and poke() functions for this sort of thing already.)

December 23, 2018

Listening to NPR's Hidden Brain podcast interviewing David Graeber about his new book Bullshit Jobs, and the word they're looking for when they describe an unnecessary executive "staff" is an "entourage". A group of people there just to make some important person look good. It's common enough there's been a word for it since feudal times.

Billionaires justifying themselves as "job creators" is a second order entourage. They're trying to make themselves look important. But they're not creators, they're gatekeepers. The need exists, the work exists, they're just standing between people willing to do it and those who need it done, collecting a toll.

Graeber mentions how BS jobs are often parasitic: they slow other people down by demanding communication overhead, which is Brooks' Law straight out of The Mythical Man-Month. He cites studies indicating 37% of workers self-report that their jobs are BS (although how much of that is "your current job is BS" versus "you had one at some point in your career" is kinda glossed over and could use some follow-up: I've had at least 3 BS jobs over the past 20 years, I let those contracts expire and went on to do something else).

But the more interesting question is how many jobs are in service to BS? If you've got a couple floors of marketing consultants, how many floors under them are their human resources departments, billing and payroll... So now it's six or seven floors of people. Janitors, electricians, cafeteria staff, bus drivers, none of whom would need to be there if the marketing consultants didn't show up to that building every day. Second order BS jobs are at _least_ as many people, and _that_ says if the 37% is right, then the vast majority of ALL jobs in this country are probably BS.

Which makes sense if Keynes was accurately predicting that by the year 2000 we'd have a 15 hour work week: 15 hours in a 40 hour week is 3/8 useful and 5/8 BS. And Keynes' prediction was published in 1930 about the year 2000, which was almost 19 years ago. If the useful fraction fell from 40 hours to 15 in 70 years, another 20 years would put the BS fraction closer to 6/8 by now.

Think about how heavily subsidized and artificial the food industry is in this country, for example. The parts of it that _do_ work are basically government controlled now anyway: farm subsidies, FDA quality control, USDA meat inspection, food stamps (it used to be government cheese but republicans financialized it so they do money and not stuff; it's easier to steal money, much harder to launder the contents of a grain silo into payments to a mistress). We've been paying people _not_ to farm for decades. (And then the actual harvest is done by illegal immigrants because the "farmers" are incompetent. They don't _do_ farming, they _own_ farms. They're slumlords; the actual work is all outsourced.)

In prison, sitting in your cell with nothing to do is explicitly a punishment. Having to sit in your cubicle and do nothing is also punishment. (Imagine you have to take an international flight. You sit in the same chair for 8 hours. The in-flight entertainment is broken. Due to TSA crackdowns you get no electronic devices or books. You have to sit there, for 8 hours. Now imagine this is your job, every day.)

Richard Nixon introduced a basic income bill called the Family Assistance Plan in 1969, and it passed the House. (Democrats killed it in the senate for not being big enough.) Why does the Overton window no longer cover it? Because of lobbying by rich people who want to eliminate the POSSIBILITY of you not working for them. If you don't work for their shell corporations, at whatever pittance and in whatever conditions they choose, you will literally die. You will starve, freeze, and be unable to get medical or dental care. None of these things are particularly expensive in other countries, but here housing and medical care are bid up as high as they go, and only available through gatekeepers (your credit score, group health insurance).

During World War II the top income tax rate was raised to 91%, and the corporate rate to 50%. They stayed there for 20 years, and back then CEOs led very different lifestyles than they do today. Unfortunately LBJ lowered the top tax rate to 70% in the Revenue Act of 1964 (attempting to maximize government revenue), then Reagan lowered it to 28%, creating our national debt _and_ the problem of "the 1%" dominating the american economy, bullying everyone else, and buying laws. (The graph of the government's debt and the graph of money billionaires have are mirror images of each other. Republican embezzlement caused these problems.)

Lowering taxes is BAD for the economy. The entire "postwar boom" period was heavily taxed. Kansas lowered its taxes and the state's economy crashed, California raised taxes and its economy boomed, and there's a REASON. Taxes make corporations invest the money they don't get to keep: if you get to spend pretax money on R&D or worker training, that's a lot more appealing. Lowering corporate taxes switches the incentive to hoarding, squeezing every last dime out of the workers and pocketing it.

The "dumbest idea ever" (that corporations have an obligation to maximize profits for their shareholders) was invented by Milton Friedman in the 1970's. It does not predate the Boomers, and can die with them.

December 22, 2018

Adding --color to grep is a somewhat intrusive redo of the -o logic, and as long as I was there I went through the grep_test list and fixed the things that were broken, some of which implied other corner cases I should test. This means I finally broke down and made "grep -e one -e two" not boil down internally to grep -e "one|two", because when you're _not_ using an extended regex there's no portable way to do that. (It logically should be \| but that didn't work in musl and I haven't even tried bionic. Oh, and Rich was mad that \1 could be wrong, because the glued together version would have one shared set of numbers for parentheticals when each -e should have its own numbering. But I was upset about having to traverse a long string multiple times and thrash the cache, instead of one regex looking for complex but cache-local things...)

So now I've got a for loop assembling a linked list of regex_t structs, and another traversing that list and filling out a regmatch_t for each, adjusting them as we advance through a previous match and figuring out which is the "best" match, and so on. Complexity I wanted to avoid, but this would appear to be the portable way to do it...

And then I hit -w. Specifically, echo | grep -w "" matches the blank line, echo "hello" | grep -w "" doesn't match, and echo "one  two" | grep -w "" (with _two_ spaces) matches again. Did I mention posix hasn't got -w?

Hmmm, my grep code splits patterns at \n, on the theory it's reading a line at a time so we can't include \n _in_ a pattern, which doesn't come up with normal patterns but DOES with null terminated input lines...

December 20, 2018

Travel day, Fade and I flying to Austin, with dog and waaaaay too much luggage. The "I hurt myself" load dragged out of Milwaukee has been broken down into smaller chunks, and Fade and I each are taking 2 suitcases (which southwest lets you do free).

Multiple times I've written a long email reply to something, gone "No, I shouldn't send that", decided it's better on my blog (it's off-topic for whatever thread it was in), left the window open to copy it there later... And then my laptop battery dies (this one's terrible), and thunderbird of course doesn't save anything as a draft anymore. Dunno why, that broke a while ago.

December 19, 2018

Elliott and I went back and forth on the list about strace and sort -V. Still trying to get grep --color finished. Todo list gets longer even while I'm taking time off to work on it.

December 18, 2018

Broke down and tried to edit the wikipedia[citation needed] page for public domain equivalent licenses, since the 0BSD section is still hilariously wrong a month after I pointed out to them that it was wrong. But the McDonalds wifi I'm working from is banned from editing for 48 hours due to troll edits. (The universe telling me not to do that.)

I got grep passing all the tests in the test suite, and then added another test and made it pass it. I should do a thorough test suite scrub at some point and make sure there's a test for every corner case I noticed and implemented in the design, but that's "review as if it was in pending" level of polishing analysis/focus, and I don't have the spare time/effort right now. Gotta do stuff. Android isn't using toybox grep yet because it doesn't support --color, so I need to add that. (When did I last open this can of worms? It's probably been a year.)

Elliott pointed out that strace is of interest, and somebody asked for sudo on github a few days ago. It's all in the roadmap, it's a question of prioritization.

December 17, 2018

Lots of naps and lying prone. My back is now... less unhappy.

Carrying GIANT HEAVY BAG yesterday (you can fit an awful lot in a zipped up sleeping bag!) was kind of stupid. I should have thrown out more stuff, I just didn't have time to sort. My fault for leaving my decision to the last minute. As usual...

December 16, 2018

Cleaned out the apartment and told them I was moving out. Mailed boxes on Saturday, threw out a lot of stuff I wanted to keep on Sunday, eventually got it all down to an overfull backpack, an overstuffed suitcase, and a zipped-up sleeping bag full of clothes and such, which was heavy enough I could only carry it about 100 or so feet at a time and which left bruises on my shoulder.

At Fade's. My back is unhappy with me. But I have closure, with the _option_ to come back in January but not the _requirement_ looming over me. I can relax.

December 14, 2018

Stressed to the point of chest pain and shortness of breath, and not sleeping particularly well. Lack of closure, gotta make a decision.

Ok: Bumped my bus ticket back to sunday so I can clean out the apartment. Dunno if I'm coming back in january or not, but I don't want to feel pressured into coming back just to clean out the apartment if I'm not recovered enough after a month to resume work in january. It's been a year; if I come back I can get a new apartment.

December 13, 2018

Wikipedia[citation needed] still hasn't fixed their mention of 0BSD even though I informed them that OSI corrected itself a month ago. Please stop deadnaming my license.

December 11, 2018

I got email from system76 about the new "meerkat" laptop, which... I feel bad about never buying another system from them again, but the second one was even bigger than the first and I don't have much use for a laptop designed to stop a charging rhinoceros with a single blow to the snout.

I'm aware that companies have return policies so a single bad experience _doesn't_ lose them customers permanently, but it was my fault for not realizing what the dimensions they listed on their website would translate to in reality, so I'm not going to ask them to eat a $2k sale because of that. And yet it also means I'm never buying anything from them again unless maybe I can stop by their headquarters in person (Colorado?) and see the thing before buying, because I don't trust anything they sell not to require me to employ a laptop caddy to carry it around for me (presumably banging two empty halves of a coconut together).

The attraction of the "meerkat" is supposedly that it's smaller, which is damning with faint praise. Having been burned twice already, I don't believe them. I could really use a replacement netbook because the 2013 laptop I'm using (also from them!) is waaaay too big (why it was a spare system for so long) and the battery's old, and between the two of them I don't bother taking it with me to lunch because 15 minutes of work at a fast food table just isn't worth it with this thing.

(Sigh, this is the same kind of catch-22 dating in my 20's was like. "If I wasn't me, or if I don't work the way I do, I'd know exactly how to do this...")

December 9, 2018

Worked out the grep -bB thing: I can realloc()! Which most of the time isn't going to copy the data: there's usually extra space on the end because getline() increases the buffer in chunks as it reads. (I can't _rely_ on that, but it's the common case.) And I don't have to strlen() because getline() already returned the length of the string, which was saved in a variable I can use.

Next up is making embedded NUL bytes work, which I did the hard part of already for sed (the old ghostwheel() function back when I saw "pattern" enough times I threw a "logrus" in as self-defense, and my naming convention sort of spiraled from there. Since cleaned up: it's regexec0() now).

So it was already _matching_ after one or more NUL bytes in a string (you can't have a NUL byte in the pattern without rewriting the libc regex stuff, but the use case here is grepping an ELF binary or similar), but the output was displayed with printf(), which is going to stop at the first NUL. So it would show a line... but not show the match when it was after the first NUL byte (except maybe for -o).

The fix is to use fwrite() instead of printf(), but the problem is the length of the match was "-1" when we wanted to show the whole line, and even if strlen() weren't expensive you can't measure the length of a line with a NUL byte in it after the fact.

Luckily, I know the length of the lines (returned from getline()), and just did the realloc() thing to stick an offset at the end of the prefixed lines, so I can make that unconditional (not just for -b), add 8 bytes instead of 4, and stick a length after the offset.

And it's in. I've got one more failing test (multiple patterns, one of them blank? What's going on here?) But hopefully soon I can chmod +x grep and thus add this test to the "make tests" regression test set once it all passes. (Where a failure means I broke something, so it can't fail normally.)

December 8, 2018

The MacOS stuff continues, but doesn't need a whole lot of input from me at the moment. I got mktemp fixed up again, and now I'm tackling grep.

The lack of --color support is what's kept Android from using toybox grep (and maybe performance concerns, but primarily --color), and the --color logic is more or less the -o logic with "different colors of output" instead of "enabled or suppressed". So I need to do a pass genericizing that a bit so the same tests can serve two masters. I did most of this work a long time ago, and it got buried (and lost) in a stacked pile of interruptions.

But what's been stopping me from working on it _recently_ is that grep doesn't pass its existing tests, and I should fix those up first. Combining "-b" with "-B" doesn't print the offset of the leading context lines, because we don't save the offset value on the linked list of previous lines. And there isn't really an obvious place to put it, not unless I want to create a new structure, and dlist_add() is allocating the old structure so switching that would unshare code.

It's a doubly linked list, so in theory I could sacrifice the ->prev pointer to store the offset, but I'm using the dlist so it can add in order (instead of reverse) without re-traversing the list each time (which is slow). And I could insert another structure in between (so dlist->data->line) but then there's _three_ mallocs which need to be freed (each in its own cache line)... ugh.

But if I collate it into a single allocation, I have to strlen() (if not strcpy()) an arbitrarily long string. This string is returned by getline(), I can't have that allocate extra data or fill out part of a structure, I don't know the length of what it's reading so it has to allocate it for me, and it returns a string.

That's why I didn't do this before. It's expensive for what it's worth. Obviously I can fix this, but I'm not happy about any of the approaches.

December 7, 2018

Oh good grief, the problem isn't where the jobs went. The problem is not a shortage of jobs. The problem is that a tiny fraction of the population is capturing all the gains. The problem is that 1% of the population ISN'T taxed enough to support the rest of the population (in indolence and idleness if that's what they want to do).

And no, it's not that we need the "brilliant work" of people like Bill Gates. His technology was utterly terrible, and people regularly wanted to KEEP USING the older stuff rather than be forced to upgrade to new (more broken) crap. What he did was CORNER THE MARKET. Android is based on Linux, which is open source, I.E. what people do when they have the free time to work on stuff they _want_ to do.

We need Basic Income, which is the same as an end to Bullshit Jobs. We need to end the idea that your day job is your identity, source of self-worth, and entire reason for existing. We got rid of the idea of nobility (my family owes fealty to Count Whatsisface who's owned by King Shinypants). We need to get rid of the capitalist nonsense that replaced it.

Technology advances until stuff gets too cheap to meter, as Star Trek predicted. (Star Trek also predicted a global war to kill off the billionaires, the same way we had various revolutions to get rid of kings.) Solar Panels and batteries do that to electricity, app-summonable self-driving car services do that to transportation, the internet does that to copying and transmitting information (from entertainment to weather reports to education)... Khan Academy is a thing somebody did for free, and then arranged to be able to do more of it because he _wanted_ to do it. Habitat for Humanity is 2 million people/year volunteering to build houses _now_ when they have day jobs to work around. Detroit rotted in 2008 because we wouldn't let anybody move into the abandoned houses until they collapsed, and "The Big Short" showed entire subdivisions abandoned and unmaintained. The Green Revolution _quadrupled_ the planet's food production and 40% of all the food we currently produce is wasted before it's ever _offered_ to anybody to eat. Capitalism is regulating scarcity that DOES NOT EXIST, so capitalism is CREATING SCARCITY. Mere subsistence is basically _free_ now, we just limit access to it.

Billionaires get rich by cornering the market. Clowns like @jack created twitter _before_ they knew it would make them rich. All the "we need to let people become rich to encourage them to create stuff" ignores the fact that they can only ever become rich AFTER they did the thing that made them rich. J.K. Rowling wrote Harry Potter on the UK version of welfare. The Xerox PARC guys who created modern computer designs (GUI, LAN, object oriented programming, etc) never particularly profited from it.

And rich bastards also say that poor people need to stay starving to encourage them to work! But rich people need to go from "I have $3 million in the bank, at even 3% interest I can spend $90k/year literally forever, I never have to work again" to "I have enough money to buy Florida" or else they'll take their ball and go home and nobody will get Rearden Metal and that'll show us all (the mad scientist's "fools, I'll show them all" speech except passive-aggressive and based on the assumption you're such a special person the rest of us can't possibly live without you and your big plan is to go off and _sulk_)...

If we invent magic so you can just wave your hand and summon a meal, a shower and a change of clothes, a warm room to sleep in, or a health potion, and nobody ever has to pay for any of those things again... people complaining that it's the end of the world because "how will I get a job, nobody needs to hire me!" are INSANE! It's the WRONG QUESTION, it's an IMAGINARY PROBLEM. Our society suffering from a SHORTAGE OF JOBS is like the Indian caste system and European nobility, it's NOT REAL. It's a social construct.

Seriously, when the Boomers die, their assumptions about how the world works need to die with them. It does not match reality. The problem is grandpa refuses to stop driving despite endless crashes. The boomers have a death-grip on power, and continue to fight the last war. They have become the problem.

An "economy of ideas" is basically people getting data from and putting it into the internet; we _know_ that's most efficiently and effectively done as open source. (If it's not published and peer reviewed, it's not science.) The "decline of the manufacturing economy" is because we can make 100 times as much stuff with 1% as many people, not because we're making or using fewer refrigerators or t-shirts. We don't repair things anymore, we throw them away and get a new one because it's cheaper. When Paul Krugman says "the long-term trajectory of our economy... is moving steadily away from making physical stuff and toward providing services" he doesn't mean we have less stuff, he means if we can now make 100 times what we can use, then we only need 1% as many people making it to meet our needs. In a Star Trek future with replicators, you don't need anybody sitting at a spinning wheel all day every day. "Telephone Operator" used to be a real job (dial zero to speak to an operator), it's not anymore.

The problem here is capitalism. The old solution has become the new problem, and Clay Shirky gave a TED talk on what happens when an institution realizes it has become the problem: the institution goes through the Kübler-Ross stages of grief. (It's a really good talk.)

December 6, 2018

I am amazingly burned out at work. I'm sitting in a cubicle, I'm overweight and too emotionally drained _by_ sitting in a cubicle to exercise or spend much effort on meal preparation (breakfast was two clearance protein bars, lunch was an avocado, eaten straight, dinner last night was an entire rotisserie chicken, which I regretted afterwards... oh and a pistachio croissant and large strawberry/coconut "pink drink" from Starbucks).

I'm alone in a city hundreds of miles from anyone I know socially, and the project I was working on that engaged my interest enough to move here... more or less finished up last month.

The money's still great and they want me to stay another 6 months. I'm trying to hold off on making a decision about that... but I'm worrying about my health. And I'm also starting to do the "I know what I need to do next but can't bring myself to do it" mental block thing that says I'm REALLY burned out...

One more week until my month off, the question is should I just cut ties rather than leaving them dangling? What's the professional thing to do here?

December 5, 2018

The big laptop's battery is old. It didn't get used much as a server, but it's wearing down really fast now that I'm out and about with it. The problem is, the battery estimator isn't reliable anymore: it used to hard power off when it said it had 6% left, and just now it hard powered off when it said it had 40% left.

I should get a new netbook, but dunno what to look for.

December 4, 2018

I removed CFG_SORT_BIG. Back when I left busybox, micromanaging the commands so you could switch off parts to build smaller versions seemed like a good idea, but it's not what toybox does. When you sit down at a toybox system, if it has "sort" you should know how that sort behaves. Having multiple possible sets of behaviors per command... not good. I've been trimming that sort of thing from ls and such; sort was a holdout I just noticed.

I noticed because I'm doing the FLAG() macros, and it's not as obvious a win as I was hoping. I've converted a half dozen files and the result is... mostly better? But I'm nowhere near eliminating direct use of toys.optflags because there's lots of places where I'm doing bit shifts or exact comparisons.

It's one of those things that's small enough it's hard to see the right thing. The macro is basically just:

#define FLAG(x) (toys.optflags & FLAG_##x)

I haven't worked out how to do a FLAG(x, y, z...) sanely, so "toys.optflags&(FLAG_x|FLAG_y)" becomes FLAG(x) || FLAG(y) and I hope the optimizer's smart enough to turn that back?

I think it's better for the:

if (FLAG(x)) {

case, but... Sigh.

December 3, 2018

I am really, really, really burned out at work. It's a good job, in that it pays well, is not unethical, is easyish for me to do, and they value me. But it's boring and I sit in a cubicle all day living in a city I don't know anybody in, and now that the project I came here to do has finished up, it's really hard to get enthused about the new project they want me to do without novelty to get me over the hump.

The new project is yocto with systemd, on x86 and arm. The userspace is still a giant multithreaded .NET app running under mono (with enough "native" code extensions that it's about half of what's running), which implements "building control systems" (I.E. really high end thermostats that also measure air quality and such).

I'm hoping the month off will help me recover enough to do another 6 months of this? The money would be useful. Unfortunately, nothing about the position OTHER than the money is at all interesting, and my entire career has been based on interesting work over money.

December 2, 2018

Finished the test.c rewrite and got it checked in. My version passed all the tests I generated yesterday, but I've since realized it should have been "! ( ! x ) -y ! ( ! x ) -y ! ( ! x ) -y ! ( ! x )".

The ability to speed up audio I'm listening to helps me pay attention to it. It varies per speaker, but something like Rachel Maddow (who literally repeats each piece of information she gives about 6 times in a row) at normal speed my attention wanders and then I go "wait, I just missed 30 seconds"... But if it's sped up I _have_ to pay attention, and I get enough information per time period to keep my attention. That's why when I get linked to something like this where I'm very interested in the topic and want to hear the material... I try to dig it up on youtube where I can go speed->2.0 in the pulldown.

(Rachel's a great speaker, this is a style thing she's chosen to do on her show. I think it's because her audience is mostly retirees? Dunno.)

December 1, 2018

I have to implement priority handling in test.c, so I'm writing a program to emit all possible combinations of:

( x ) -y ( x ) -y ( x ) -y ( x )

Where x is true/false, -y is and/or, and each parentheses may be there or not be there. First pass said that's 15 bits worth of things that toggle, or 32768 possible test strings.

But parentheses come in pairs: there are 4 ( positions and 4 ) positions, and the counts must match. On each side there's 1 "0 set" pattern (0), 4 "1 set" patterns (1 2 4 8), 6 "2 set" patterns (3 5 6 9 10 12), 4 "3 set" patterns (7 11 13 14... basically ~ the 1-set patterns, where it's the zero moving instead of the 1), and 1 "4 set" pattern (15). And then those 4 bits for the left parentheses need to match each same-count set of right parentheses, so 1*1 + 4*4 + 6*6 + 4*4 + 1*1 = 1 + 16 + 36 + 16 + 1 = 70 total matching-count patterns. And 7 bits of other true/false or and/or variance, so 70*128 = 8960 tests.

EXCEPT, I didn't eliminate "left parentheses must come before right parentheses" invalid combinations? Hmmm... I can _test_ for them, but I'm not sure how to _sort_ them out. Eh, program says 28 of the 70 are illegal parenthetical sequences. And as long as I'm filtering, I might as well just iterate through all 32768 and filter out the bad ones. (I'm writing this in C.)

I was thinking of trying to do a shell script to make all these sequences in tests/test.test (there's a name), but iterating through 30k nontrivial function calls is slow in shell, and I remember Divya's chmod tests. (I cut the number down, and 90% of what that was checking was whether the _kernel_ was getting the permissions right, but still...) The more recent bc tests taking way too long to run are a similar cautionary tale.

November 30, 2018

George Bush Sr. died. I remember Dave Barry fake-quoting him about "The United... whatchamacallem... States. Barbara and I have a summer home there." He was the ex-head of the CIA (who was sad he wasn't allowed to stay on when Jimmy Carter became president), and his foreign policy consisted of having numerous foreign heads of state on speed-dial ever since, because he knew them all personally. His domestic policy was basically nonexistent, which means he did less damage than his republican colleagues: he _merely_ presided over the savings and loan crisis and a prolonged recession lasting his entire term.

(Aside: the 1929 stock market crash and decade-long great depression led to a bunch of New Deal banking regulation under FDR that prevented the problem from happening again for 50 years. Then Reagan dismantled it all due to the fish filter fallacy, and we had the S&L crisis around 1991, the dot-com bust around 2001, and the mortgage crisis of 2008. Yeah there'll be another one along soon, we're overdue.)

Bush Sr. built on the GOP's legacy of racism (Willie Horton, war on drugs, etc), but that was continuing the policies invented by Barry Goldwater and first implemented by Richard Nixon. Bush Sr. invented the term "Voodoo Economics" when campaigning against Reagan for the 1980 presidential nomination, but then embraced those policies as Vice President, and his own administration increased the national debt more in 4 years than Reagan had in 8. He instructed the IRS not to ever audit millionaires (luckily they ignored him), and his "no new taxes" pledge was all about increasing inequality in the USA to basically rebuild the aristocracy at the expense of the poor. His main achievement was a war with a much smaller opponent, the highlights of which were basically war crimes. (But that's war for you.)

And yet by modern GOP standards, he was a paragon of virtue. Everyone we've had since has been a mustache-twirling cartoon villain. (In the case of Darth Cheney, standing behind an affable buffoon, which explains why H.W. liked Dan Quayle: must have reminded him of his son.) And the two _before_ that were Ronald Reagan, who lowered the top tax rate from 70% to 28%, simultaneously bankrupting the country and creating "The 1%" problem, and then only avoided impeachment because exculpatory Alzheimer's turned the Iran-Contra affair into "I don't remember all those crimes", and Richard M. "yes totally a crook" Nixon.

In that police lineup, yeah he's the least worst. I suppose that's a legacy. The soviet union collapsed on his watch, so you could imagine a point to all that debt if you tried really hard, and it wasn't yet so bad that Bill Clinton couldn't balance the budget again afterwards and pay down some of the debt. (Until his son Dubyah screwed it up again.)

November 29, 2018

Got a bus to Fade's for the 14th. Accidentally got it for the 20th and had to go to greyhound and get it fixed.

Trying the Starbucks just south of work, which is open until 9. Working from Tiny Apartment is just too depressing, but I tend to work until I'm tired and then autopilot my way home. (Well, grocery store and then home.) The Starbucks on the way back to the apartment from work closes at 7, so this is an improvement.

November 28, 2018

Elliott has decided to make toybox work on macosx, and sent me many patches. He's gotten a largeish set of commands to build:

basename cat chmod cmp comm cut dirname dos2unix du echo egrep false fgrep file grep head help hostname id ln md5sum mkdir mktemp od paste patch pwd readlink realpath rm rmdir sed setsid sha1sum sleep sort tee timeout true uname uniq unix2dos wc whoami xargs xxd


Yes, for once I'm happy more work is coming for me out of left field interrupting my work on test.c, because A) it's in service of the hermetic build stuff which is moving Android closer to self-hosting, B) this is work I meant to do when I had the SEI mac laptop, and never did because I couldn't get the development environment working, because I'd somehow borked my apple store login in a way that even Jeff couldn't figure out how to fix.

I break everything. I do Linux because I can strip and field service the OS down to the source code, which isn't an achievement so much as scar tissue from years of battles with everything everywhere constantly breaking. (The ability to stick a printf into the source code helps you debug; you don't necessarily need to modify the resulting system once you've found out things like how to stub out 5 different sync() functions so vi doesn't gratuitously hang for 5 seconds every 100 characters when the disk is loaded, or that you can chattr +i /etc/resolv.conf to make it USE THE NAMESERVER I GAVE YOU DARN IT, and so on.)

I use close to stock OS images because the alternative would be constructed entirely out of spackle and anger and be a 24/7 job keeping it running. Ubuntu 14.04 behaves because I know where to hit it. I refuse to open the systemd can of worms because it smells just like Windows, a rickety pile of bad assumptions with hair gel in the shape of a White Dude standing in front of it saying "trust me". It would be a full time job finding bug workarounds. That's why I'm probably going to Devuan on my next laptop.

November 26, 2018

Working on test.c. The contributed one had a lot of duplicated code, such as all the -eq and such strings listed twice in two different functions, which sets my teeth on edge because the same thing in 2 places can get out of sync. (The two hard problems in computer science are naming things, cache invalidation, and off by one errors. The underlying generic issue with cache invalidation is "the same thing in two places can get out of sync". And pile up and waste space.)

The reason it's been in pending so long is "test" is an annoying design. It wasn't so bad before -a and -o got added, but now parsing it requires careful sequencing. (Remember: 0 is true in shell return values.)

$ test ; echo $?
1
$ test == ; echo $?
0
$ test -e == ; echo $?
1
$ test -e == -e ; echo $?
0
$ test \( == \) ; echo $?
1
$ test \( == \( ; echo $?
0

You have to look at the number of arguments to see what kind of test you're running. For two arguments, the first is special. For three arguments, the second is special. Note how the parentheses above don't group arguments but are strings that == is comparing (otherwise the first parenthetical would be "true" because the one argument string is nonzero, and the second would be a syntax error because the parentheses don't match). In theory "-e == -a -d \<=" could be a valid test if you're saying a file called "==" exists and a file called "<=" is a directory, but the parser sees "-e == -a" as three arguments and then -d is neither -a nor -o: boing, too many arguments. Of course you can use parentheses to force it to interpret it the other way, and functional parentheses vs "it's just a string argument" is yet _more_ contextual interpretation.

Then there's:

$ test -a ; echo $?
0
$ test -a -a ; echo $?
1
$ test -a -a -a ; echo $?
0
$ test -a -a -a -a ; echo $?
bash: test: argument expected
2

I.e. one argument is "a non-empty string", two is "file '-a' exists", three seems to be non-empty string and non-empty string, and four is "non-empty string" and "non-empty string" and... argument expected. Hmmm, how about:

$ test -e == -a -n
bash: test: too many arguments
$ test -e blah -a -n
bash: test: argument expected
$ test -e blah -a x ; echo $?
1
$ echo $?
0
$ test -e blah -a bork bork bork
bash: test: too many arguments
$ test -e blah -o bork bork bork
bash: test: too many arguments

So "-e == -a -n" is not "a file named == exists, and -n is a nonzero string". But "-e blah -a x" reads that way. But -a continues parsing in case of a later -o... and -o looking for a later -a...

$ test 1 -a 1 -o "" -a "" ; echo $?
0

And this is basically the logic I implemented for "find". Which was very much not easily genericizeable. (And required two parsing passes, although that was mostly because it was applying the same command line to multiple files.)

Don't get me started on "-e == -a -o -d != -o", and "\( "x" \) -a \) == \)".

Oddly enough, "test !" would be 0 if interpreted _either_ as not followed by no arguments, or as nonempty string. And how about test \( \( "" \) -a "" \) -a ""

November 22, 2018

At Fade's, my sister Kris plans to pick us up to drive to a Thanksgiving thing at 2.

My turtle board finally died. There were like 3 small things wrong with the initial round of prototypes and the most serious one was that the USB mini-A connector (which both powers the board and provides the serial console) was only held on by the data connections, it didn't have a clip to prevent the solder from breaking when you plugged and unplugged it. A raspberry pi case tends to hold it still, but I had to take the case off to reflash it (which means pressing one of the buttons to bring the onboard flash utility up), and... the connector broke. Sadness. I was trying to flash the j32 bitstream on it (I got one for the EVB working, and Niishi-san sent me a Turtle build) so I could maybe poke at kernel porting trying to get the new mmu working in Linux. (The sh4 mmu implementation turns out to be hugely overcomplicated and would never fit in an LX9 FPGA, so we did something simpler that should work fine but needs different kernel code to drive it. So right now it's still booting nommu kernels.)

Sigh, Thalheim broke the mcm binary toolchains. Ahem, I mean he "improved" them into decision paralysis, where users who've never used them before are confronted with a menu of choices they have to make before they have enough information to make them. Each decision required at the start reduces user uptake by some percent (they're not invested yet and can leave it on the todo list until "later"), and faced with a Starbucks menu of options when I've never had coffee before (and it won't work if I get it wrong, and I won't know why) I'd just walk away, so how can I expect anybody else not to? I need a generic "compiler" that they can use for each target. Asking "do you want to try mips, powerpc, or m68k" is already Enough Decisions.

Gimme a damn hammer. I'll work it out from there. (I remember volunteering at Habitat for Humanity when I was like 25. I brought a small hammer from home I'd written "Mine!" on in sharpie, which I used for all my nail whacking, and they said that's a finishing hammer meant for drywall nails and I should use some other hammer, and I didn't because it's a hammer and the nails went in and I was not a professional carpenter. I was still learning not to split the wood or bend the nails, and tool I was familiar with beat tool I was not familiar with. I was _not_ up for keeping multiple kinds of hammer straight, thank you. If I was still at it I'm sure I'd have graduated to hammer knowledge, but they stopped letting me be on site in flip flops so I stopped volunteering and took a tae kwon do course at UT instead. I earned a yellow belt with white stripe, the most flattering rank ever, which means I'm qualified to run away.)

November 21, 2018

Introduced to a new coworker as "the toybox guy" and I assumed she wouldn't have heard of toybox, but she had. (Then again said coworker, when her badge was flipped, was the original author of the shadow utils package, who I last saw at a LUG meeting in Austin in the 90's. So I'm not sure how representative that is.)

Bus to Fade's for thanksgiving, so short day at work.

Mimi Zohar pinged me about the cpio xattr/initramfs stuff again. I should find some cycles for that...

November 20, 2018

I taught the toybox test suite that "make test" should only run tests with the executable bit set, so I can have "pending tests". (You can still run "make test_commandname" individually, or TEST_ALL=1 to override it, it's just the test everything mode that cares.)

I've needed this forever because the test suite is full of known failures. Now I'm going through and chmoding the command.test files where all the tests _do_ pass, and cleaning up a few low hanging fruit fixes while I'm at it. An auditing whack-a-mole session, as it were.

People regularly ask how they can help with toybox development and I suck at providing good answers to that. (The cleanup.html page explaining what I _do_ was one attempt. The pending directory is me trying to get out of their way and let code go in without being up to my standards yet.) And way back when I pointed them at the test suite as some place they couldn't do much damage, and I wound up with a bunch of tests that didn't test anything obvious, tested system calls rather than toybox's implementation (as in a half-dozen variants of the same command that go through the exact same toybox code path and only vary in what they feed to a kernel syscall), tests that took 15 minutes to run, and so on. There are tests toybox doesn't pass because whitespace in output changed, or an error message got rephrased. Some tests pass for toybox but not for other implementations (in which case what are they testing exactly, TEST_HOST has to work too or it's not a good test)...

Plenty of it's my fault too. I add tests to the test suite as todo items, tests that fail now in toybox but not ubuntu, to remind me to change the code so toybox does a thing right. (Some I check in, some I don't...) My auditing pass is hitting low hanging fruit like grep.c where I can implement some small todo items in the command and _then_ the test passes, but I should finish cleaning up test.c first. Too many dangling wires I've lost track of what they connect to, gotta close tabs...

November 19, 2018

Bug report from someone using toybox under systemd in yocto. That is a combination I did not expect.

The first yocto cleanup patch touched fsck.c, getty.c, test.c, tftp.c, and tftpd.c, all out of pending, which means somebody at yocto is using those commands in pending. (With the -Werror flag, that broke the build.)

Of those, test.c looks like the lowest hanging fruit and I've made stabs at cleaning it up before. The problem is A) test is kinda badly specified (behaving differently depending on the number of arguments _but_ in a sort of recursive way), B) this implementation has a lot of duplicate code that should collapse together if I can figure out how to order it right.

November 18, 2018

Weekend, got some decent coding in, half of which was tackling the backlog of patch submissions and bug reports.

November 16, 2018

The people lying about global warming are the same people who used to run the Tobacco Institute, and before that they worked for the Ethyl Institute whose job was to manufacture doubt and uncertainty about whether leaded gasoline was harmful. There's literally an industry profiting from hiding the bodies of people billionaires kill in search of profit, and it goes back 100 years.

A century ago the billionaire (adjusted for inflation, anyway) head of the Du Pont family put the toxic metal lead in gasoline, for literally no reason other than profit. The Du Pont family made their fortune selling weapons during World War I, then bought control of General Motors in a recession when the founder lost his stake to a margin call. Irenee Du Pont became president of GM in 1919, and he retired from the board of directors in 1958.

Oil goes back further than that: Samuel Kier built the first oil refinery in Pennsylvania in 1854 as kerosene replaced whale oil for lighting (saving the whales from extinction), but refining produced all sorts of other stuff that they either had to find a use for or pay to dispose of (after a river they were dumping it in caught fire). The first production run of the Ford Model A had a switch on the dashboard to let the carburetor use alcohol or gasoline, but gasoline won out by being cheaper.

Early gasoline engines had a chronic problem called "engine knock". Engine pistons compress gasoline vapors, and when the piston reaches the top of its stroke the spark plug gives off a zap to ignite them at just the right moment, driving the piston back down and turning the crankshaft that makes the wheels go. But compressing a gas heats it (same amount of heat in less space, so the temperature goes up), and pure gasoline in a hot engine tends to go off before the piston reaches the top, which means the explosion tries to drive the crankshaft _backwards_. This is loud, bad for gas mileage, can cause the engine to stall, results in a bumpy ride, and wears everything out faster. But worst of all, in the early days when engines were hand-cranked to start them, a backfire would break the cranker's arm (or worse; people hit in the head or chest often died, early cars were _dangerous_).

In 1913 a man named Charles Kettering invented the "self-starter", using a small electric motor to start the engine instead of having a human stick a removable crank into the front of the vehicle. (The trick to getting a tiny electric motor to turn a big gasoline engine was to run LOTS OF VOLTS through it, which would melt the motor if kept up for long, but used for a few seconds at a time it cooled down again and was fine.) Starter motors made automobiles WAY more popular, and GM bought Kettering's company Delco in 1918.

But engine knock was still a problem: it was loud and terrible for gas mileage (think all the old cartoon sound effects of cars constantly backfiring, that's because for the first 20 years they _did_), and more powerful engines that could go faster needed higher compression (higher pressure = more force = more speed), making premature detonation more common. So the Du Ponts made Kettering the boss of a man named Thomas Midgley they'd hired a couple years earlier, and had them work on the problem.

Engines that ran off _alcohol_ (like the original Ford Model A could) didn't have much engine knock. Neither does diesel oil, which is thick heavy crap that needs a glow plug to set it off (basically the cigarette lighter that a car's "cigarette lighter" port was originally invented for, only inside the engine), but gasoline was the cheapest and most plentiful waste product of the kerosene refining industry and they really _wanted_ cars to use it. Midgley's research team quickly worked out that if you mix a little bit of ethyl alcohol into the gasoline, it raises the ignition temperature so the fuel doesn't go off before the spark plug sets it off, solving the engine knock problem. They patented adding alcohol to gasoline to reduce engine knock, but didn't publicize their finding because Du Pont couldn't really PROFIT off that solution. The production of alcohol is older than recorded history and the stuff's available at every liquor store: anybody could pour some vodka into their tank each fill up without asking permission. It helped them sell _cars_, but Du Pont wanted to profit off of the _fuel_ too. He told Midgley and Kettering to find a new chemical with a patentable production process.

So Midgley and Kettering tried to come up with an alternative additive they could patent, and they cheated. They created a compound that was four ethyl groups stuck to an atom of lead, which broke apart again when you heated it up. This tetraethyl lead reduced engine knock just like the plain alcohol had. The only difference was A) it had a bunch of lead in it, B) it was a new compound so Du Pont could patent its production and charge for every tank full.

Remember, they'd already patented adding alcohol to gasoline 4 years earlier, they KNEW that this new stuff was just "alcohol plus poison", and they did it anyway. Midgley went on later in his career to invent CFC refrigerants, and has probably done more ecological damage than any other individual scientist. He also once flew back from getting his blood chelated in Sweden (basically dialysis to take the lead out) to give a demonstration in Washington DC where he washed his hands in leaded gasoline to prove how "safe" it was. Everyone involved with this was a total hypocrite, but it's the billionaire Du Pont family that profited the most from it. This was done for money, and for no other reason.

Tetraethyl lead was trademarked under the name "Ethyl", and its marketing explicitly avoided any mention of lead, because everybody _knew_ lead was toxic. Standard Oil (also largely owned by the DuPont family) invented a more efficient manufacturing process for the stuff (and continued to tweak it for years, restarting the patent clock each time) and they and GM formed the Ethyl Gasoline Corporation with a manufacturing plant in Bayway, New Jersey, and over the next year 32 of the 49 workers making the stuff were hospitalized for severe lead poisoning, and five of them died. That got people's attention, but the Du Ponts kept a lid on the press and when it got really bad (they started selling it in 1923 and cities started banning it a year later) they just pulled the product from the market for a bit (still not telling anybody that normal alcohol worked fine to prevent engine knock) while a guy named Frank Howard masterminded a public relations program to get everybody buying it again, and became director of the Ethyl Gasoline Corporation. Years later the Federal Trade Commission forced Frank out of Standard Oil for collaborating with Nazi Germany, but he kept his seat on the board of Ethyl.

Big Oil has always been full of lovely people.

The Ethyl Gasoline Corporation created a think tank called The Ethyl Institute to produce doubt and uncertainty around whether lead in gas was harmful, even though people IMMEDIATELY knew it was. They bribed government officials ("regulatory capture"), ran massive PR campaigns, and basically ran out the clock on 40 years of patents by preventing the other side from ever _quite_ being able to bring itself to act to stop them. The point was delay, because each year they could keep doing it, they made billions of dollars.

When the _first_ set of patents expired, some of the "researchers" from the Ethyl Institute founded the Tobacco Institute, and ran the same cloud of squid ink hiding the deaths for the profitable tobacco industry. And when they finally gave up on that (in the US; they moved overseas and still profitably give cancer to brown people around the world), the same researchers moved on to global warming denialism protecting the $6 trillion annual revenue of the fossil fuel industry. It's not just the same playbook, it's literally the same people (although it's been so long there's about 3 generations of them now).

There are many many many good articles on this topic, but billionaires don't want people to know about it because it's like a serial killer's first victim: they weren't good at hiding their tracks yet so it displays their Modus Operandi in plain view.

(Oh, and the phrase "natural gas" is a marketing term for methane created to help it displace coal gas in the 1920's. The old phrase "now you're cooking with gas" was invented by one of Bob Hope's sponsors as part of this push. These days a lot of people are intentionally saying "methane" again to make it clear that gas power being less bad than coal or oil doesn't make it a good thing. All the brouhaha about cow farts is a distraction from the TONS of methane leaked into the atmosphere by "natural gas" wells. Yes methane is a far more effective atmospheric insulator than carbon dioxide, but its half-life in the atmosphere is about 7 years before it breaks down to normal CO2 and water. All the methane that comes out of cows went INTO the cows as animal feed, meaning that carbon came from plants which took the carbon _out_ of the atmosphere quite recently, and thus makes only a transient contribution to global warming. But "natural gas" is fossil carbon from the dawn of time that permanently increases the carbon in the atmosphere even _after_ it breaks down from its more efficiently insulating form, causing us a decade of grief and then still being a permanent problem afterwards. The natural gas industry is trying VERY HARD to get gullible lefties to blame cows instead of fossil fuels for global warming, because "if you're not vegan you can't complain about fracking" is a pretty effective way to buy another decade of $6 trillion annual profits. The strategy has always been to _delay_ the opposition while squeezing blood from the stones as long as possible, then move on like a pardoned Richard Nixon afterwards or retire to aruba or something, untouchably defended by giant piles of cash and the lawyers and bodyguards the interest they earn can employ.)

(Oh, we now know that most of the fossil fuels were made around the same time, because when trees evolved lignin nothing could break it down for tens of millions of years, until fungi figured out how to eat it. So it just piled up like plastic is doing today, and tons of it got buried and turned into coal. There's a bunch of historical periods where the climate got really weird and stayed that way for a very long time.)

November 14, 2018

OSI did the thing! 0BSD is now Zero Clause BSD everywhere! Woo!

(Ok, in some places it's hyphenated, and in some places the words are in a different order, but that's standards bodies for you.)

Of course the one guy who disagreed did not lose gracefully, he had to post his "BUT I STILL DON'T AGREE" to the list and then made their new page on it start with a "Note: Despite losing, I was right" disclaimer paragraph, that's... sigh, really?

But if that's the best he can do now to sow confusion and retard adoption of the license it's still way less than his first attempt at jamming up the works, which _did_ do a lot of damage. (99% of the people out there couldn't care less about OSI, it's been replaced by SPDX. So the _details_ of what OSI says don't matter, the important thing is the headline is no longer "opinions differ on shape of planet".)

I should update the toybox license page's end notes.

November 13, 2018

I got a llvm build script from the hexagon guys, which... doesn't work for me because I didn't have cmake installed, then because the cmake in Ubuntu 14.04 is too old. (Not hexagon's fault, the llvm build is insanely brittle. They need the latest C++ spec and the latest cmake. Not a tool you want to bootstrap systems with.)

I'm sympathetic with the desire to move off of make, my own make rant was spread across two posts to the old Aboriginal Linux mailing list. (It's one of many things I should probably turn into a proper essay at some point, but I don't because who would read it?)

Meanwhile toybox's scripts/ is now detecting that if you set "LDFLAGS=--static" and rerun the build, it should re-link, but "make chrt" is going "nope, chrt binary is up to date". Because in my own project I implemented the make logic as a shell script (which is at least vaguely portable), but provided a conventional "make" wrapper for build UI compatibility reasons (people are used to configure/make/install), and it's hard to keep that a simple thin wrapper. The make wrapper is Doing A Thing it probably shouldn't be, and it's hard to get it to _not_ do that remotely efficiently.

November 12, 2018

I share the pain of this message on the OSI license approval list. "the last thing I'd want... would be to get involuntarily tied up with a prior, controversial license that I didn't write, different in many crucial respects from my own..." Yeah, conflating is a thing OSI does. "As for me and my house, I'm no longer interested in OSI approval." Is anybody? SPDX replaced them.

I just want them to stop spreading an incorrect name for the license I got SPDX approval for. To me, de-approving the license and taking the bad page down is 100% equivalent to changing the name. Either way they would stop spreading bad information.

November 11, 2018

The bus back to Milwaukee was delayed 5 and 1/2 hours. If they'd told us this at the outset we could have made other plans and come back, but first it was delayed half an hour and then there were NO UPDATES (and 6 people waiting at the St. Paul pickup). The local Greyhound guy had no info. The Minneapolis Greyhound location wouldn't answer the phone even when _he_ called. The national Greyhound 1-800 number couldn't find _anything_ in their computer (the lady in Bangalore couldn't confirm this bus route existed, couldn't look me up by confirmation number or name...)

Finally the bus showed up 5 and 1/2 hours later. The story was that the original bus had died and they had to wait for the next bus on this route to come in so they could send it back out again immediately instead of doing its normal maintenance. (They couldn't put us all on the next scheduled bus, which was already full.)

Not happy about the lack of communication here.

November 9, 2018

I missed my bus to visit Fade, or more accurately got on the right line number but heading to Chicago instead of Minneapolis. By the time I found out, my bus had left, and the next one's not until 6:40 (and gets in somewhere around 2am, I think).

Once again this administration's primary goal (literally the first project it pushed for after the inauguration) has hit a snag. Remember, every pipeline they don't get up and running ASAP won't have enough total oil run through it to pay off the mortgage they took out to construct it. Oil go bye-bye in a single digit number of years as everything goes electric. It's _already_ cheaper to generate electricity with solar and wind, and storing it in batteries is on its own exponential cost curve running about 5 years behind solar. (Even the naysayers expect $50/kWh around 2025, despite all the headwinds the GOP can throw at it. Cell phones pump billions of dollars into battery R&D no matter what.)

All the fossil fuel discovery, extraction, transportation, refining, and consumption infrastructure that exists today is basically a stranded asset at this point. It costs more to repair than to replace, and soon it'll cost more to _run_ than to replace. The case for investing to install _more_ of it just isn't there, despite enormous political will from dinosaurs that profit from selling this stuff. All the back room deals, fire sales, and regulatory capture in the world won't hide stark business reality for too much longer.

I should reply to this post about old architectures in the kernel and there was a twitter thread about GDPR I've misplaced the link to. The thing is, patents expiring makes old hardware new again. Technology doesn't advance when patents are granted, it advances when they EXPIRE. The cell phone was invented April 3, 1973, and 20 years later it took off because all the PATENTS had expired. The great economic engine of the USA in the 1800's was ignoring european patents. China rapidly technologically catching up with the US in the 1980's and 90's was similarly powered by completely ignoring western IP claims. Patents are a giant drag on the economy, we just tend to ignore the control groups.

Patents were invented to prevent artisans from taking secrets to their graves, but that's a lot less of a problem these days. We know exactly what the 11 herbs and spices are, the Coca-Cola recipe is public (it was appendix A of the book "For God, Country, and Coca-Cola" produced for the hundredth anniversary in 1987, along with the three _different_ methods by which the author got it)... We can reverse engineer all sorts of stuff with modern laboratory analysis. The limiting factor is intellectual property law.

Jeff has 6 patents on SEI's core technology, and defending them internationally (not from lawsuits, just filing and re-filing all the paperwork to keep them active internationally, without which any Fortune 500 can set up shop in whatever country isn't covered) costs as much as a full-time engineer. I'd much rather we could do more work instead, but large predatory corporations want to steal everything they can, and when they prove to themselves they can't do what we do (yet again), they're too proud to come crawling back to us to use our expertise to do the work, so they abandon the project instead. (Thanks 3M and Siemens. Thanks a lot.)

November 8, 2018

Engineer at work talking about how they want to move away from OpenSSL to "WolfSSL" specifically because they want something proprietary that they can pay for "timely fixes".

I didn't even try to explain that "closed source" plus "crypto" should be treated about like "I put the lid back on that jar of spaghetti sauce but didn't stick it in the fridge, it should be ok, it's only been out at room temperature for 3 days, go on have some."

I did mention how post-heartbleed the open source community moved away from OpenSSL (which had disgraced itself by chasing away all external reviewers) to other solutions: Google maintained BoringSSL, Theo's OpenBSD crew forked LibreSSL, the FSF loonies have GnuTLS-free-gpl-free-copyleft-all-hail-stallman, the embedded world has BearSSL... and that yes, this opened a window of opportunity for a proprietary company to market themselves, but as the industry re-consolidates around a preferred solution I expect WolfSSL's market to dry up and then you'll be paying to keep them alive until you move back to an open source solution. But hey, this won't be a problem for maybe 5 years, which is long past management's decision horizon.

November 7, 2018

Looking at the sh4 syscall generation rewrite from Firoz Khan, but gmail strikes again: multiple copies of the same message with different headers get dupe-killed and you CAN'T STOP IT, and since he cc'd both linux-sh and linux-kernel, the 0/3 message describing the series wound up in my linux-sh folder and the actual 3 patches wound up in linux-kernel.

The kernel's "patch inline" thing combines badly with both mime encoding of email (gratuitous = escapes) and html archives (even if they have the raw data in <pre> blocks they replace > and such with &gt; and when they don't have <pre> wrappers there's a <br> on the end of each line...) And you can't just cut and paste something that's tab indented and expect the whitespace to survive. (Sometimes it does, sometimes it doesn't. But this is one more reason toybox uses spaces instead of tabs for its indents: I can cut and paste patches without worrying so much.)

Whitespace is important in patch files: each context line starts with a space (meaning blank context lines are one space followed by a newline), and if your line is tab indented it's space then tabs. Context lines have to match exactly or the patch hunk fails. (There's an ignore whitespace flag but then the indentation you're _applying_ is likely messed up and you wind up with mixed tabs and spaces in a file, which becomes obvious when you change tab stop. Again, two spaces means I get a lot less mixed tabs and spaces in submissions. They _happen_, but you've generally got to be four indentations deep before it even comes up and even then it's seldom an even number of them so it's a lot less likely to trigger.)
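If you've never stared at it: the first byte of each hunk body line is the whole dispatch, which is exactly what email clients mangle. A sketch of the classification (hypothetical helper for illustration, not patch.c's actual code):

```c
#include <assert.h>

// Classify a unified diff hunk body line by its first byte:
// ' ' = context (so a blank context line is " \n", not "\n"),
// '+' = added, '-' = removed; anything else means the hunk is over.
// If the file's lines are tab indented, a context line is a space
// followed by tabs, and stripping that leading space must leave an
// exact byte-for-byte match or the hunk fails to apply.
static char hunk_line_type(const char *line)
{
    char c = *line;

    return (c == ' ' || c == '+' || c == '-') ? c : 0;
}
```

That's why a pasted patch whose tabs got collapsed to spaces (or whose blank context lines lost their leading space) stops applying.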

November 6, 2018

Fade's flight to Austin was cancelled. Never fly Frontier for something important.

(She was trying to fly home to vote since voter suppression ate her absentee ballot. If you think the GOP doesn't voter suppress white people too, ask why a college ID doesn't count as voter ID but a concealed carry permit does. There are entire books about how racism is a tool the rich have used to distract gullible poor whites from their slum lords bleeding them dry with payday loans and such for _centuries_. Even when the billionaires internalize the racism and misogyny, they're _primarily_ tools of oppression by rich bastards dividing and conquering their slaves, their renters, and their harem. France had the right idea, guillotine the lot of them.)

Spent all day too stressed to get much done.

Didn't canvas last night (closest campaign HQ that might have an address list and/or door hangers was a 5 mile bike ride away and daylight savings time means it was dark by the time I left work), but I did get up early to stick a "Vote today" sign on the front door of my apartment building. When I got home somebody had stuck much _better_ signs (with the poll location and everything) on both the front door and the elevator. Yay us!

Mixed results, but... net positive? We didn't get any of the longshot rock star candidates (in texas at least, yay AOC), but the solid trudging work mostly went for us. The senate was _designed_ to let cows vote, so we weren't going to win that one this time around.

November 5, 2018

Finally got the x32 target working in mkroot. That's probably enough to cut a release, I should do that. (Dust off the mailing list, or do it on the toybox list? Hmmm...)

Now I'm trying to add native compiler support, which means packing up the native compiler directories, which means I taught the build scripts to call mksquashfs on the native compiler directories. And while I'm there, unless NO_CLEANUP is set have it delete the native compiler directory afterwards, and blank "build" before each build (so the musl-cross-make work directories don't accumulate). Some of my builds are in a VM with limited space, its disk keeps filling up...

November 4, 2018

Poking at the 4.20-rc1 kernel (which I once again didn't get any commits into the merge window for), and the CONFIG_UNWINDER_ORC bug is back. Switch to the frame pointer unwinder that doesn't need libelf installed, and the Makefile still barfs demanding libelf be installed before it'll continue. (I.E. the kernel developers are not regression testing on a system that DOESN'T have every possible build dependency. I'm weird for caring about the forest of unknown crap sucked into the build to produce who knows what.)

I worked around it by deselecting CONFIG_STACK_VALIDATION entirely, which meant deselecting RETPOLINE first because that was pinning it on, and then it went:

CHK include/generated/compile.h
make[1]: *** No rule to make target `init/main.o', needed by `init/built-in.a'. Stop.

So yeah, that's an -rc1 all right.

November 2, 2018

Fade's ballot never arrived and other people in my twitter feed are also being voter suppressed. Even in blue states, any GOP officials anywhere in the plumbing are throwing in their last hail mary passes to cheat.

Going through my toybox todo list. I've been meaning to post an update to patreon forever, still doing the "guilt about having been so long between updates, need to have a big update to make up for it"...

October 31, 2018

Yup, I didn't send in a talk proposal to scale either. Thought about doing the 3 waves thing but... I gave that talk at Flourish, was well-prepared and proud of the result, and the recording was never posted. I did the work, and _they_ didn't do the work (again!), and I should really just do both sides myself which means podcasts. But without an externally imposed deadline (and an audience that shows up to hear me and will be disappointed if I don't perform), I never seem to bother...

I could do something on hermetic android builds, on mkroot, lessons from the j-core experience, on 0BSD, an update on toybox, a redo of the "simplest possible linux system" thing (now with less jetlag!)... but who really wants to hear from yet another white male in 2018?

Also didn't go out for Halloween, because it's raining. Well, I went to Wendy's and worked on the toybox release notes.

Toybox 0.7.8 is out.

October 30, 2018

The Scale CFP (Southern California Linux Expo Call For Papers) expires tomorrow, I should send in a talk proposal. Working on the toybox release instead.

October 29, 2018

At my table at Wendy's. The boil water restriction got lifted yesterday so I've got soda refills again, and... my laptop's mouse froze a minute or so after I opened it. (It was working, and then _stopped_. Suspend and resume didn't fix it.)

Navigating from window to window with mouse commands, closing everything so I can reboot instead of getting work done. (I use open windows as notes-to-self about what I'm in the middle of, so when it's time to reboot the box I need to write them all down so I don't lose track of my place. Yes, "collate todo lists" is a perpetual todo item of mine...)

And after a reboot, the mouse is back and working fine. Linux on the desktop! There's a very long list of reasons it didn't take off. (Sigh, doing that used up the battery. It lasts about an hour and a half. The netbook's battery lasted at least 3 hours.)

October 28, 2018

Flew to Austin on Frontier Airlines. They would appear to be to Southwest what Southwest is to Delta. The plane was almost full yet I had a row to myself (and the guy across the aisle had the row to himself, although somebody moved into one of those seats after takeoff.) I dunno if this is a "give first timers good impressions" thing in their computer, or an artifact of being the last row in "zone 4" and the computer filling it in sequence? Either way, more enjoyable flight than I expected. (And it was like $30 plus airport fee, really cheap. Ok, there was no free in-flight beverage service (tempted to pay $3 for a soda, but didn't) but I knew what I was signing up for. Can't complain.)

Went straight to fiesta, voted (Go Beto!), came home, and collapsed. (It was only a 3 hour flight and I didn't get up that early, but travel remains exhausting nonetheless.) Then walked with Fuzzy to pick up George's medication (she has an enlarged thyroid and is in a hilarious cone-of-shame to stop her licking a sore, but doing reasonably well for a 15 year old cat), and they built a Rudy's just south of the vet so we had dinner there. Wound up watching The Big Short and talking until 5 am.

It's good to be home.

October 27, 2018

Fade gave me a half pill of her modafinil prescription to see if it helps with my seasonal affective disorder. (It's like 50 mg, half of the smallest pill they make for it.) Dunno if it's the placebo effect, but I concentrated _really_ well and got a lot of programming done. :)

I made it about halfway through the toybox release todo list in a couple hours, I'd probably have made it through the other half too but Fade got hockey tickets. Women's ice hockey: university of minnesota wave caps vs... somebody else. It was fun. Fade's getting season tickets. (If she wants to date any of the hockey players she has my enthusiastic encouragement, but she's been way too busy to have a girlfriend. Grad school's like that...)

October 26, 2018

Spent today shoveling through stuff at work, trying to get ready to get on my bus to Fade's. (Spoiler: I made it.)

The 9.0.7 OS version on the N40 (I.E. the sh4 box's port from WinCE to Linux, which they're pretending is a dot-upgrade from "9.0.6" to "9.0.7" instead of a complete new category of operating system running the old WinCE code under Mono) is just about ready to ship (alpha test versions are in the field), but there's a couple teething issues left which boil down to "when a box boots up, sometimes you don't know what IP address it got, and thus how to connect to it with the GUI tools, unless you connect to the serial console". Which is not something we ever want people to have to do in the field.

The first reason: our recent domain migration broke the DHCP "update the DNS server with this box's name" functionality, because it wants a fully qualified domain name now, but DHCP is what _returns_ the fully qualified domain name. So now I have to call it twice and/or remember what it said last time, again defeating the purpose of DHCP and adding a worry about "what if it changes and my cached data is obsolete, how do I know?"... Anyway, if we're seeing this customers might too, so I fixed it.

The second reason is I added zeroconf support to the box as a fallback if DHCP didn't work, and there are hiccups. This is not the same as assigning a static IP at install time, which you'd presumably already know. But if you tell it "get a DHCP address" and it can't, it gives itself a zcip address so it has _something_. (The dhcp server really should do this for us, but doesn't. I have a toybox todo item to add it to mine, but dhcp is still in pending.)
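The address pick zcip does is simple; here's a minimal sketch of the RFC 3927 selection (my own illustration, not toybox's or the box's actual code, and the ARP probe/defend steps that make it safe aren't shown):

```c
#include <stdint.h>
#include <stdlib.h>

// Pick a pseudo-random link-local address in 169.254.1.0 through
// 169.254.254.255 (RFC 3927 excludes 169.254.0.x and 169.254.255.x).
// Real implementations seed from the MAC so a rebooted box tries to
// reclaim the same address it had last time.
static uint32_t pick_linklocal(unsigned seed)
{
    uint32_t host;

    srandom(seed);
    host = 256 + (random() % (254 * 256)); // 0x0100 through 0xFEFF
    return (169u << 24) | (254u << 16) | host;
}
```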

The bug is, when it's got a zeroconf address the snmp trap announcing the board doesn't fire, because the binary-only module they bought but never got the source to (written in .net and now running under mono) isn't setting the "enable broadcast packets" option (SO_BROADCAST on Linux) on the socket it opens (I guess WinCE didn't need it?), and for some reason it works in the DHCP address range but doesn't in the zeroconf address range. (I.E. it's not that I don't understand why this fails, it's that I don't know how it _ever_ works in the cases it _does_. It's not supposed to.)

But address categorization in the kernel is a horrible brittle mess of historical reasons that should probably be fixed. (If it was we'd reclaim tens if not hundreds of millions of IPv4 addresses, as described yesterday. Just reclaiming "class E" is more addresses than the entire populations of japan, australia, new zealand, taiwan, and korea put together.)

Anyway, I can't fix their binary only .net module, and I fiddled with the idea of a kernel patch to disable the check, or an LD_PRELOAD library to intercept the bind(), recognize this specific one, and add the missing broadcast enable for it, but... ew.

So now I'm trying to write my own SNMP trap generator. It's a single UDP packet that's basically a transport for a string (containing 4 keyword=value pairs, comma separated I think), with about 40 bytes of header. Except I grabbed some broadcast packets to port 162 and displayed them through hexdump -C (the engineering department's full of these boxes, I get a packet every 5 minutes or so) and it doesn't seem to match any standard I can identify? Hmmm. It's _tiny_, but messy nonsense. And all the code I can find to deal with this stuff is bureaucratic nightmare corporate garbage intentionally obfuscating itself. I'm guessing there's money in this protocol, even though the protocol is trivial, so they try to make it look like a lot more work than it really is. Sigh.

Did not manage to get it sorted before it was time to head to the bus. (Gotta catch a plane back to Austin to vote, and I might as well fly from Minnesota as from here...) Handed the SNMP thing off to Rich Pennington.

October 25, 2018

"Multicast is useless" can _also_ be sung to "Every Sperm Is Sacred".

I'm on a mailing list with some people who are trying to reclaim a couple hundred million unused IPv4 addresses and add them to the usable pool. There's some small debris like the fact that 0.*.*.* and 127.*.*.* are each 16.7 million addresses when they _need_ one. (Or in the case of 127.x two because ubuntu stupidly put its loopback nameserver on an address _other_ than, for no obvious reason. Then again, they can fix that themselves...)

But that's a rounding error compared to the fact that the last 1/16th of the entire IPv4 address space (268 million addresses!) is "Class E Experimental" reserved for future use, and since we all say we're living in the future we might as well use them now.

Another 1/16th of the 4 billion total addresses is reserved for multicast, which was never remotely as popular as it was expected to be, and should be reclaimed.
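Both reserved chunks are just a top-nibble check; a sketch (my own illustration, addresses in host byte order) of the ranges in question:

```c
#include <stdint.h>

// 224/4 (top nibble 0xE) is multicast, 240/4 (top nibble 0xF) is
// "class E" reserved-for-future-use. Each is 1/16 of the 32 bit
// address space: 1 << 28 = 268,435,456 addresses.
static int is_multicast(uint32_t addr) { return (addr >> 28) == 0xE; }
static int is_class_e(uint32_t addr)  { return (addr >> 28) == 0xF; }
```

That's roughly the categorization stacks hardwire, which is why reclaiming the space means patching kernels everywhere, not just updating a registry.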

The reason multicast failed is data compression: napster is _exactly_ the sort of thing they expected multicast to be needed for, and instead it used breakthroughs in data compression that could squeeze a whole CD into 1/11th the space, and then MP4 did the same for video. The _problem_ with those is they do _not_ degrade gracefully: lose a single packet and the stream's useless until at least the next keyframe.

And this means youtube, netflix, hulu, prime, spotify, snapchat, whatever google hangouts is calling itself this week... NONE of them are using multicast. Realaudio died for a reason. Multicast is tying up huge amounts of IPv4 address space for approximately the penetration of "gopher".

I'm 99% certain that an authority handing out addresses to all remaining multicast users worldwide (if there are any outside of hotel television systems, which are LAN-local anyway) could comfortably fit them in a /16 address range, although two of them (one /16 for public, one /16 for private) might be politically easier. But doing that still gives us the 268 million addresses back: the reservation is 128k addresses out of 268.4 million. (Even a _generous_ reserve for actual remaining usage would be a complete rounding error against the reclaimed range.)

People keep talking about how IPv4 "knew it would be obsolete someday" and NO IT DIDN'T. In 1980 that was more addresses than there were people on the planet, and you didn't give an address to a PERSON, you gave it to a machine, which meant a minicomputer shared by dozens of users. You gave an address to a _household_, possibly even to an entire apartment building, and that's before Pauline Middelink invented IP masquerading so your router could provide an entire coffee shop full of people connections from a single IPv4 address. IPv4 _is_ enough for everybody, which is why the switch to IPv6 has been going on for 30 years and shows no signs of accelerating. IPv6 was overengineered and overcomplicated and nobody _wants_ to use it.

I researched this area in the first place because some 20 year old piece of code at work is using multicast for time synchronization. It turns out multicast is in the NTP and SNTP RFCs from back in the 90's, and they never removed it when they reprinted them. This box's design is from the 90's, so of course it uses that, and they don't want to _lose_ the feature even though it makes no SENSE in 2018.

But even in that use, it doesn't route outside your LAN so a /24 multicast address range (256 addresses) would be _plenty_. It would be massive overkill. It goes off less than once a minute, it should almost certainly just use broadcast packets. But back in the late 1990's they decided to use multicast, so...

October 24, 2018

Editing and posting old blog entries and I caught up to the one with Fade's banana bread recipe and I went "Ooh, I still have the goat kefir that substitutes well for buttermilk from when Fuzzy was visiting", and the grocery store 2 blocks away often has clearance bananas for 29 cents a pound...

Of course I'd have to cook it in the gas oven, which... I am not comfortable with the gas oven. It does not seem to oven properly. I do not have an oven sense for it. (I turn it on and there's no noise, no hum, no window to see inside, nothing glows. I have to open it to see if it's warmer inside than outside, and how do I know if the pilot light's gone out and it's filling the apartment with gas? Ok, you can smell it, but by that point there's a lot of gas in the apartment, which seems problematic.)

I grew up with electric, also understand wood, and am interested in learning about other types, but gas is not my friend. It wasn't my friend in pittsburgh and is not now. (And I refuse to call methane "natural gas", which was a marketing ploy by the fossil fuel industry. The "now you're cooking with gas" advertising slogan was delivered by Bob Hope in 1939 on behalf of his sponsors (thanks to advertising man Deke Houlgate), you still occasionally hear writers put it in the mouth of old people to indicate they're old, and after all this time the industry's still at it.)

October 22, 2018

I'm burning out fast at work. I get next week off and I am SOOOO looking forward to it...

October 21, 2018

Doing the guilt loop about patreon again, where it's been long enough since I've updated them that I want to update but think it should be about something _big_, which means I go do work and then don't finish it so can't announce _progress_ yet...

I got a large mkroot pull request with over a dozen individual commits, which I've looked at the titles of but haven't downloaded and looked at yet. I still want to merge mkroot into toybox, which is a simplified stripped-down version. This takes mkroot in the opposite direction. I kind of want to encourage the guy to fork it, because it sounds like he has plans I don't? But then there's the "simple reference implementation" issue, and I recently posted on the toybox list about copying libraries out of cross compiler toolchains (to create a dynamic build from an arbitrary toolchain) being hard to get right.

Working on the toybox release, and my tree is DEEPLY into the weeds, as waiters say (which I guess they got from golfers maybe?). A bunch of files have three different conflicting changes applied to various parts of them, and "git commit --patch" is very nice, but then I need to pull each change over to a clean tree and see if the result works in isolation, and if not do "git reset HEAD^1" and try again. And a lot of them were things like "the top setup in ps.c is switched to use start_redraw(), but that function's only in watch.c right now, so I need to move it to lib/lib.c before that command builds again, except it should go in lib/interestingtimes.c and I _really_ need to rename that to something shorter". (It's my not-curses, a name which is apparently very impolite in china: Terry Pratchett mentioned in an interview that his Discworld book of that name had trouble selling there because of it...)

Wanna make all the changes, gotta sequence them. Finding a loose thread to pull on and unravel the knot becomes an issue. As always, it's a symptom of too many interrupted work sessions I never got back to finish before I'd forgotten where I left off, so I have to rediscover each one when I trip over it later...

Right now I'm trying to finish up adding a "%" type to lib/args.c that fills out a long with milliseconds using xparsetime(). This touches scripts/mkflags.c and lib/args.c, I changed xparsetime() while I was there, and now I'm changing every caller. And then when I test it the build breaks because it's trying to "#define FLAG_%", which says mkflags.c needs more work...

October 17, 2018

So much thread. Today I did NOT send this to the OSI list:

> So I'll try to avoid adding too much more here. :)

P.S. If you want a text version of the "why public domain equivalent licensing" spiel, it's here. And here's data about the growth of "no license specified". (There's more recent numbers but that's the link I could dig up easily. The trend line continued slowly downwards for at least the next couple years.)

That's why I started pushing my license, before you ever heard of it.

If you want what I was trying to do with zero clause bsd specifically, I explained that here.

October 16, 2018

I asked OSI to undo their mistake. It's turning into a thing again. I am sad.

I recently reopened the 0BSD can of worms, and when the discussion petered out after only positive responses I poked them again to ask about the next step, and Richard Fontana showed up to block it.

He was the guy who got this wrong in the first place. He was the guy who asked SPDX to retroactively change to confirm their mistake. He's been the only voice defending his mistake, and every other voice now showing up to agree with him has been replying "me too!" to Richard.

I really don't want to get involved in OSI's internal political dysfunction, but it's hard not to be increasingly hostile to somebody who is not arguing in good faith. I've posted five times today already and all I want OSI to do is stop being publicly wrong. (And I've been corrected that OSI didn't _reject_ the Creative Commons Zero license, they were so politically paralyzed that CC0 gave up and withdrew its application.)

October 15, 2018

Back at work, which is intensely interested in getting NTP to work with an SNTP server. (Because Microsoft does SNTP, not NTP, so it's what they were actually using.) I believe we're on week 3 of this, but I've lost count.

Met Fuzzy for lunch at the crab/lobster place from Friday, and she had an oyster flight (like a beer flight but different varieties of live oyster), and "salmon crack" (maple candied salmon jerky), and seems generally quite pleased with herself.

Good thread about Google+ death by middle management behind the scenes.

Google+ is joining the google graveyard and it's about time. Google tried VERY VERY HARD to shove an entirely unwanted thing down people's throats (which people might have liked more if it wasn't shoved down their throats) because it was "strategic" to push back against Faceboot, and just buying twitter apparently didn't occur to them or something? I have no idea what was up with that. And they did it by BUNDLING, which you could NOT OPT OUT OF, and it was every bit as horrible as when Microsoft did it.

Unfortunately, the damage has been done. Youtube comments were never great, but the Google+ integration sucked and made the comments a nazi wasteland, and even though google backed off after a couple years, youtube has never recovered. Meanwhile, right-wing loons continue to buy any news property with a shred of historical reputation, to aggregate and turn it into GOPravda du jour. (Faux News' cover is blown, so they're adding layers.)

October 14, 2018

Fuzzy went off to her Epee bout. She beat 3 opponents and came in 74th out of 127, which her coach says is great for your first national bout. (Her coach was here because he's competing in his own event.)

I hung out at Starbucks and put together release notes for the toybox release, which turned into a big todo list of half-finished stuff I forgot I hadn't tied off, and I wound up delaying the release again.

In the evening she made scones with the goat kefir she found at the grocery store. (It's liquid yogurt made from goat milk and we used it as a buttermilk substitute.) They're very good.

October 13, 2018

Went with Fuzzy to find her Stabbing venue, which is down the street from the semi-dead mall I took that Japanese course in.

Today's mostly fencing, epee's tomorrow. I hung out in their food court and did a little programming, but it wasn't very comfortable and I packed up after an hour or so. Went home and slept on the actual bed until she got home. (As the guest she gets the air mattress, which means I'm back on the sleeping bag on a very hard tile floor. Lots of tossing and turning, not a lot of sleep.)

October 12, 2018

Friday, John T's last day at work. He was the team lead on the N40 (sh4 board) project, and the domain expert on the LON chip in it. This is a chip that if misprogrammed, will physically destroy itself, so without him we probably can't implement that feature. (His kid just had kids so he's off to Florida to be a grandparent.)

Took Fuzzy to the all you can eat sushi place for dinner, which has crab legs on the buffet. She approved. I ate too much for the second day in a row.

October 11, 2018

Fuzzy's plane came in this morning, which was a surprise because I thought the 11th was Friday, not Thursday. (My bad.) Spent the morning frantically cleaning my apartment, then stopped by work to apologize for missing a meeting, then took a bus to meet her at the airport, escorted her back to the apartment, dropped stuff off, toured the grocery store and Starbucks near home, went back towards work where I left her in the used bookstore with the giant orange cat, went back in to work, apologized for missing _another_ meeting (I only had one for the day on my calendar, turned out to be the afternoon one but I thought it was the morning one I'd missed), did 4 hours of work, then met Fuzzy at Milwaukee Public Market to eat All The Crab and a lobster. (Well she had that, I had fish and chips.)

Busy day. I got zero programming done.

October 10, 2018

Finally got the watch.c rewrite checked in, although there's more to do on it. (It doesn't handle ANSI escape sequences at all right now, it just drops the escape character (ascii 27) so it prints out [35m and such.)

The crunch_str() plumbing was written when I thought combining characters were _before_ the character they applied to, not after. So it needs to keep printing until it finds the next character it could _not_ display. The tricky part is it consumes an input string and returns how much input that was and how many columns were advanced, except if combining characters are _after_ the characters they apply to, "columns advanced" and "we're done" are conceptually decoupled. It can go "yes, that was 80 columns" and still need to evaluate more input before confirming it's _done_.

This also means that a terminal displaying this has to redraw characters it's already done to _add_ combining characters, which may change the color of the character (complete redraw) and so on.

I.E. utf8 is very clever; Unicode is really stupid. Need to figure out how I want to handle that... (Even converting to a "number of characters left" metric, it needs a "1/2 a character left" state to say it's advanced the cursor all the way but may still need to output trailing combining characters from the next batch of input. I think it would need to go from 0 to -1 to indicate that with integers. Great. I need to change every caller, _and_ add test cases for the edge case of "trailing combining characters at the right edge of the screen with disjointed input".)
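The underlying annoyance is easy to demonstrate with wcwidth(): a combining character follows its base character and occupies zero columns, so "columns advanced" and "done with this character" really are separate questions. A sketch (my own demo, assumes a C.UTF-8 or en_US.UTF-8 locale is installed):

```c
#include <locale.h>
#include <wchar.h>

// Returns display columns for one wide character, after switching to
// a UTF-8 locale. U+0301 COMBINING ACUTE ACCENT comes *after* the
// 'e' it modifies and is zero columns wide: consuming the 'e'
// advances one column, but you can't know the 'e' is finished until
// you've peeked at the next non-combining character.
static int columns_for(wchar_t wc)
{
    static int init;

    if (!init) {
        if (!setlocale(LC_ALL, "C.UTF-8")) setlocale(LC_ALL, "en_US.UTF-8");
        init = 1;
    }
    return wcwidth(wc);
}
```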

Interesting: "more" exits when it runs out of input, so if you pipe less than a screenful of stuff to more it acts just like cat. My test case was, of course:

for i in 1 2 3 4 5 6 7 8 9; do echo $i; sleep 1; done | more

But less _doesn't_, even just "echo hello | less" clears the screen and waits for you to hit "q".

But when "git diff" produces less than a screen full of output, it doesn't pipe it to less; the output only goes through less when it would otherwise scroll off the screen.

I thought less had to be smart (less, screen, and vi all use the infrastructure I'm building for watch), but instead git's being complicated.

October 9, 2018

Andy (one of my 3 bosses at work) asked if I want to extend the contract another 6 months. I'm torn. Pittsburgh's not a bad city but I don't know anyone here, my job isn't exactly world-changing (it's not _useless_, just not my idea of particularly important; it's basically very fancy expensive thermostats), the average age of my coworkers is probably 60, and I'm in a cubicle in an open floor plan. That said, the money's quite good and I should probably work on that "retirement" thing a bit while I have the chance.

My toybox work is never going to pay the bills (Google's given me their "Open Source Award" with a $250 gift card twice, which was nice of them), and although Jeff says he's going to have money in the new year to hire me back, given how often he's said that before I'd give it a 20% chance of actually happening.

Andy pointed out that I could take a full month off (second half of December and first half of January) before coming back, which moved the idea from "I don't think I'm up for it" into "tempting" territory. Huh...

October 7, 2018

So NDK R18 is out, and I should probably test with it... 2 hours to download with my phone. Lovely.

Meanwhile, fixing up the x32 symlink issue in mkroot (tail end of the mcm-buildall rename support), and fixing a build break in the timeout command for the x32 target due to the posix committee doing an insane "hallucinate a new bespoke type" thing...

Ok, building toybox with the NDK seems fixed now except for needing llvm-cc symlinked, and the liblog thing ( only, no liblog.a, so --static breaks the build). And the second of those I can fix up in portability.h (Elliott says the version in the NDK is just a stub anyway).

(I'm told building a kernel with the NDK is unlikely to happen any time soon. I need a lot more space cleared before opening that can of worms. Might get mkroot's userspace to build, though...)

October 6, 2018

Cycling back to toybox release prep, tried to build toybox with the x32 compiler and "timeout.c" broke because the microseconds field of struct timeval is not a long there. They introduced a gratuitous suseconds_t type for NO OBVIOUS REASON. (It's microseconds. Goes up to a million. Fits in a 32 bit int, but here it's "long long".)

If this was time_t I'd just change the type, but it's NOT time_t. Why is it not time_t? If you're going to bitch about alignment constraints (can't have the struct be 12 bytes, it must be a power of 2!) then why have a second gratuitous type just for the subseconds field? Struct timespec (measuring nanoseconds, going up to 1 billion) uses long and struct timeval (measuring microseconds, going up to 1 million) makes up its own type. What is WRONG with the posix committee? (Other than "Jorg Schilling continues to exist", I mean.) Look, 1 million will NOT fit in a short, fits in an int. 1 billion will ALSO fit in an int. Making these different sizes is CRAZY...
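The workaround I know for typedefs of unknowable width is the usual one: cast at the format string. A sketch (my illustration, not timeout.c's actual fix):

```c
#include <stdio.h>
#include <sys/time.h>

// suseconds_t is long on most Linux targets but long long on x32,
// so handing tv_usec straight to "%ld" (or "%lld") warns on one
// target or the other. Casting to long long is correct everywhere.
static int format_timeval(char *buf, size_t len, struct timeval *tv)
{
    return snprintf(buf, len, "%lld.%06lld",
                    (long long)tv->tv_sec, (long long)tv->tv_usec);
}
```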

I'm sorry, I'm not humoring this.

Humans can just about perceive milliseconds (60hz screen refresh is about every 17ms, an MIT study found an image needed to be flashed for 13ms for a human to process it), anything below that's noise to us. Computers are clocked in nanoseconds (1ghz = 1 cycle per nanosecond, though they generally take multiple clock cycles to do anything, with a bunch of jitter). So time functions based in each of those units have reason to exist. Time functions in microseconds, such as adjtime(), suit neither use case and just clutter up the place. (I know they wanted a wider range in 32 bit numbers than nanoseconds would give them, but it's still just + or - 35 minutes; go ahead and use 64 bits already.)

October 5, 2018

Made another attempt to get qemu to run a Hello World kernel, and it's being stroppy. In theory qemu's -kernel loader can handle ELF files (that's what vmlinux is), and you can write to the serial port with a two line loop, so if your entry point is a loop writing characters to the serial port...

In reality, I haven't gotten it to work yet. I've tried:

echo "void _start(void) { *(volatile unsigned char *)0x81093025 = 42; for(;;); }" > bork.c
powerpc-linux-musl-cc bork.c --static -nostartfiles -nostdlib -Wl,-Ttext=0xc0000000
qemu-system-ppc -M g3beige -m 256 -nographic -kernel a.out

I should see an asterisk (writing ascii 42 to the register), but I'm not seeing an asterisk.

I'm using ppc because that's the mkroot target I've got using a vmlinux kernel already. That boots and runs. But this doesn't. I think there's linker magic afoot. (Or else the virtual serial port needs more setup?) I want to get it working on qemu before trying with j-core on the numato board, but it's not working and it's hard to see why. (And of course at some point qemu -S broke so it doesn't start suspended, but runs off into la-la land before you can ever attach gdb. Why?)

October 2, 2018

Haven't got my email set back up yet, but I've got the files copied over and more or less sorted. I'm using the giant System76 laptop from 2013 (the one I've been doing toolchain builds and such on, since it's 16 gigs ram, 8xSMP, and has a terabyte disk, it's just physically ginormous. The _new_ ginormous System76 box still has a distro with systemd on it, and is thus effectively a brick.)

Back poking at watch.c: I'm implementing the cursor tracking logic, and hit the question "what does a tab do if the xterm width isn't divisible by 8?"

First thing is xfce's "terminal" does not display an accurate width when the window is small: it says 10 and it takes 16 characters to wrap. (Bravo. I think changing the font size confused it?) So expand it to 14 (actually 19), and echo -e 'a\tb\tc\td' produces "a b c d".

Huh, with 80 column lines (after printing out 8 groups of 0-9 to confirm it _is_ 80 characters), if I repeatedly print "a\tb" the first tab advances 7 characters, the second and later advance 6, and then the last one advances 5.

I went to linux text console (ctrl-alt-f1) which measures 170 columns for some reason (framebuffer, I guess), and doing "\tx" over and over ends with two consecutive x characters. Which makes sense: 170/8 is 21, 21*8=168, but tab advances you to the start of the _next_ block of 8 characters, so the last tab is position 169. And then if you tab from there you wind up at 170, I.E. the tab is ignored. And you can tab multiple times at the right edge of the screen, it'll never advance you to the next line.

Tabs are fiddly.
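The behavior above boils down to: advance to the next multiple of 8, but clamp at the right edge instead of wrapping. As a sketch (hypothetical helper with 0-based columns, not actual toybox code):

```c
// where does the cursor land after a tab? The next 8-column stop,
// clamped to the last column (so a tab at the right edge is ignored)
static int next_tab(int x, int width)
{
    int stop = (x/8 + 1) * 8;

    return stop < width ? stop : width - 1;
}
```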

Next problem: when I wrote the crunch_str plumbing I thought combining characters went _before_ the printing characters, which would be the sane thing to do so you immediately know when you've hit the end of the line. Alas, utf8 and unicode were not done by the same people: unicode combining characters come after the printing characters, which means you don't know when you've filled up a line until you've read _more_ data than will fit in the line. So you have to overshoot and parse at least one character more than you can print to know when a line's done. (It also means that when rendering, you redraw the same characters multiple times when data comes in slowly.)
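The overshoot logic looks something like this sketch (the width() stand-in only knows the U+0300 combining block; real code would use wcwidth() or its own tables):

```c
// simplified stand-in for wcwidth(): treat U+0300..U+036F as zero width
static int width(unsigned wc)
{
    return (wc >= 0x300 && wc <= 0x36f) ? 0 : 1;
}

// how many codepoints of s fit in "cols" columns? Combining characters
// FOLLOW their base, so we can't stop at the character that fills the
// last column: trailing zero-width characters still belong to it, and
// we only know the line is done when a char would START a new column.
static int chars_that_fit(unsigned *s, int len, int cols)
{
    int i = 0, used = 0;

    while (i < len) {
        int w = width(s[i]);

        if (used + w > cols) break;
        used += w;
        i++;
    }

    return i;
}
```

Note that deciding to stop requires looking at one more character than gets printed, which is the overshoot.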

(Again, this isn't _just_ to make watch work. I need to write vi and screen and such too.)

October 1, 2018

Slight hitch in the plan to ship a toybox release before today: my netbook died over the weekend while I was in Minneapolis visiting Fade. It suspended before lunch on saturday and resumed to a black screen after lunch, and wouldn't re-suspend when the lid closed, so it wasn't just the backlight bulb burning out.

I had a professional frown at it, and got an official professional "dunno". The bios isn't coming up, which makes further debugging difficult.

I'm back in Milwaukee now where my other machines are, and a few weeks ago I asked Fade for a usb hard drive cable (she's the keeper of the amazon prime account), and I'm using it to copy the files off the netbook's drive. I got lazy, my most recent backup is 3 weeks old, so of course it happens now.

September 28, 2018

And friday. Where did the week go?

I proposed to OSI renaming the Free Public License to 0BSD, and today it's gotten positive responses from 3 different people, including, inexplicably, Bruce Perens.

...I'll take it?

I composed, but did not send, the following response. On the theory that "selling past the close" is a bad idea. (If things are going your way, _let_them_.) That said, posting it here:

On 09/27/2018 09:07 PM, Rick Moen wrote:
> Please pardon a bit of legal pedantry, but 0BSD is not 'equivalent to
> placing code in the public domain' (as you say on
> and elsewhere).

I usually say "roughly equivalent" but link to the wikipedia page called "public domain equivalent license". I'm aware reality is insanely complicated, and am trying to present a simple story I can use to encourage individuals to license their code in a way corporations don't immediately reject.

> Copyright title
> continues to exist in the abstract ownable property in question from the
> date of creation to when copyright expires -- whereas the defining trait
> of actual PD is ownable title having expired or been expunged.

And in europe there are inalienable moral authorship rights...

> You might feel that this distinction doesn't matter: We would all hope this
> is the case.

I feel it confuses the story to try to explain the difference between copyright, contract, trademark, patent, trade secret law, why you want to avoid the eastern district of texas, the horror that is the "United states Court of Appeals for the Federal Circuit", let alone international jurisdictional issues...

I am entirely aware I am not a lawyer, let alone an IP lawyer (at least as much domain expertise there as kernel vs GUI vs crypto), but I could give quite a long prep talk to someone about to meet a good IP lawyer.

> The assurance third-party reusers have that they aren't
> committing copyright torts is that they're doing so in good-faith
> reliance on a copyright notice with permissions grant, knowing they're
> either inside the copyright runtime but exercising that grant or past
> the copyright runtime in which case the work is _truly_ public domain.

The need for a license is because you can't just "public domain it", yes.

> And another reason this isn't equivalent to PD is that the legal
> obligation[1] to retain the copyright notice (e.g., the 'Copyright (C)
> 2006 by Rob Landley ' example on your toybox page)
> until copyright expires -- which for reasons mentioned above is for
> everyone's benefit.

The Berne convention says works are copyrighted even without explicit notice, and the Apple vs Franklin lawsuit extended that to cover binary computer code in 1983. The requirement to keep specific notices was the half-sentence I removed.

The internet is really good at finding plagiarism, and with github, proving you published first and that the thing is yours isn't usually a huge deal. The larger cultural issue is "attribution" vs "ownership", but legally demanding attribution leads to visible watermarks on photos and such, when what you really want is not plagiarising:

The difference is pretty obvious to the younguns

> Yes, we get that coders would like to magick all of this hassle away --
> but wishing doesn't make it so. Your licence, CC0, MIT License,
> ISC License, and Fair License are IMO about the closest one can safely
> get that I've so far seen.

That's kinda what I'm going for, but MIT and ISC are public domain _adjacent_ licenses that have the "stuttering problem" leading to multiple duplicated copies of license text.

(I asked and they said "the copyright dates changed, and a strict reading of the license requirements said..." I've used that specific file as an example often enough they changed it in master, but the problem's all over the place. The kindle paperwhite had 300 pages of this nonsense in its about->license pulldown.)

That said, half the law is precedent and common usage. Here is a very minor change to a license in widespread international use for decades already, which explicitly claims kinship with one of the most widely known license families, just about the only one other than "the GPL" with instant name recognition even among newbie programmers, and the one which the "GPL vs BSD licensing" axis is named for.


September 27, 2018

The In Nomine game was cancelled this week, so I get an unexpected programming night.

I need to print a known number of unicode columns for watch to display text without unexpectedly scrolling the screen. This is a similar problem to the top command, but that escaped all nonprintable (combining) characters to provide an unambiguous representation, and here I want them to do their thing.

This led to a thread on the list with me asking how combining characters work. (They go after the character they apply to, not before.) And that seems doable. I still need to escape nonprintable ones (invalid sequence and unknown code point).

The fiddly part is low ascii characters, which aren't standard. The xfce terminal displays some of them as boxes with a hex number in (unprintable codepoint), with width 1. (Well, it advances the cursor by 1 but displays a little wider, and they visually overlap with the next character; close enough). The Linux built-in text mode (ctrl-alt-f1) doesn't display anything below 32 (space) and doesn't advance the cursor either. Most of them are just ignored. (Everybody ignores NUL, although I'm told it's because commands have a surprising number of off by one errors and output null terminators all the time. Toybox doesn't. :)

I need to beat crunch_str() or similar into something that can display this, but... tabs advance the cursor by a variable amount based on the current X position. The \b character goes BACKWARDS. \n moves down a line, \r jumps to the left edge... the behavior isn't just "how many columns do we advance by". And then there's ANSI escape sequences...

I need to teach the crunch_str escape callback to return -1 meaning stop early, and let the caller deal with it.
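Something like this sketch of the -1 protocol (entirely hypothetical signatures, not the real crunch_str API):

```c
// callback returns columns advanced, or -1 meaning "stop early,
// let the caller deal with whatever's left"
typedef int (*escape_cb)(char c, int x, int width);

static int render(char *s, int width, escape_cb cb)
{
    int x = 0;

    while (*s) {
        int adv = cb(*s++, x, width);

        if (adv < 0) break;
        x += adv;
    }

    return x;
}

// example callback: tab advances to the next 8-column stop (clamped at
// the right edge), newline punts back to the caller, the rest advance 1
static int demo_cb(char c, int x, int width)
{
    if (c == '\n') return -1;
    if (c == '\t') {
        int stop = (x/8 + 1) * 8;

        return (stop < width ? stop : width - 1) - x;
    }
    return 1;
}
```

The point is the measuring loop stays dumb, and the weird stuff (\b, \r, ANSI escapes) becomes the caller's problem.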

September 23, 2018

Another busy week at work. They're all busy these days. Money's good though. (Slowly paying down the large home equity loan I took out to buy another year riding down SEI, after multiple consecutive "flood of the century" cleanups wiped out my savings.)

I dug up an old Numato board to send to Johann-Tobias Schaeg, who plans to port the Tron t-kernel to it. (A Japanese RTOS.) This is something I generally want to encourage, so it was worth a few cycles digging my numato board out of the closet (I have 3 but only brought one to milwaukee; I still have the turtle board prototype and an EVB to run j-core stuff on) and getting it reflashed and booting Linux.

Took a while. I'm a bit rusty on j-core board setup it seems. :)

September 21, 2018

No, the reason stack measurement was using signed is it can grow in either direction and thus produce a negative result, which you have to take the absolute value of. We care about the _distance_ between the two, and signed two's complement math gets that right even if it wraps around.

All those "oh noes, the math wraps, let's optimize out the entire if() statement!" C++ bigots can get hit with another -fstop-doing-that compiler flag. (The correct response to most "undefined behavior" is to DEFINE it. If your optimizer can't handle it, then I don't get that optimization. I already _know_ my code could be slightly faster if I "did it another way", such as rewriting it from scratch in assembly. Don't care, I'm balancing several other factors to produce the design I want to produce. Bring me your portability arguments when your theoretical new toolchain can _also_ build the Linux kernel, I spent 3 years trying to extend tinycc to do that and I think I have a reasonable grasp of the problem space thank you. "The compiler can do two's complement signed integer math" is already a capability required to be present by posix, why do you think they're going to implement _more_ than one way to do _subtraction_?)
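The signed-distance point, as a sketch (hypothetical helper; the real check compares addresses already on the same stack):

```c
#include <stdlib.h>

// the stack grows up on some architectures and down on others, so what
// we want is the DISTANCE between two addresses on the same stack:
// signed subtraction handles either growth direction, then take the
// absolute value
static long stack_used(char *base, char *now)
{
    return labs((long)(now - base));
}
```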

September 16, 2018

Cycling back around to toybox watch.c. It's hard to stay motivated with politics (the oil companies, boomers, patriarchy, white people, and possibly capitalism are all on the downswing but their death throes are trying _really_hard_ to take the country and possibly planet with them), the EU Copyright directive is a disaster, the C++ people are doing their attack du jour on C. (I stopped paying attention after Stroustrup declared C obsolete, which was about the same time he declared linked lists obsolete. Projecting much?)

Anyway: watch.c. The header it generates includes ctime() output, and the man page says ctime doesn't internationalize so length is always 24, but I'm not quite comfortable trusting multiple libc variants to be consistent about that. For one thing, the string it returns ends with a newline. (Why?) This is probably heavily specified and I _could_ confirm it's stable, but the strlen is cheap (stays within a cache line if it's aligned) so... eh.

Fluffing up the roadmap, which needs more work. I want to cut a release at the end of this month, and I've only got a couple weeks left to do that.

Rich reminded me to take another look at the stacktop logic. I think the reason I didn't make it unsigned the first time is that userspace can't put the stack in the upper half of the virtual address space (which is reserved for the kernel), and if you add two "unsigned"s to the typecast it doesn't fit in 80 characters. (And even if it did cross the midpoint doing two's complement math it'd get the answer right _anyway_: it's comparing two pointers on the same stack to see how much stack space has been used, the programmer knows something the compiler apparently doesn't, and the typecast is to get an out-of-control optimizer to stop messing with stuff it doesn't understand.)

But yeah, technically Rich is correct. So I went in and added the unsigneds, and then verifying those lines were correct... if stacktop is zero we _don't_ return early (and thus call the local function), but stacktop being zero means we vforked(), in which case we _shouldn't_ recurse, so...? (Answer: xexec() in lib/ is also checking stacktop, this check here is so that on our first run we don't spuriously think we're out of stack.)

The problem is I'm not putting enough work in on toybox to keep it all loaded into my head, so when I go back and poke at stuff I wind up spending a lot of mental effort to re-create my original reasoning. It's a bit like maintaining minimal airspeed to stay airborne. Below a certain amount of development I spend all my time reverse engineering my own work, at least when trying to complete partially implemented commands or poking at the core infrastructure, anyway.

In this case, looking at the rest of the stacktop checks I _think_ the _exit() at end of xexec() should be testing if (!stacktop) instead of if (!CFG_TOYBOX_VFORK), but... was it an oversight or was I thinking of something subtle I didn't document? I don't _think_ I was, and if I didn't I should %*(#%&# add a comment now. But it's been long enough that I don't remember why I did this. It looks like a mistake. I think it was a mistake. It's only in an error path, so probably never got much testing, and the effect is subtle (mostly missing fflush(0) and possibly sigatexit() cleanup not getting called... toysh didn't get implemented enough for shell builtins to matter yet)...

Sigh. Back at LinuxWorld Expo in 2000 or 2001, I bought the boxed set of recordings of every single panel that year (and damaged my hearing listening to them on the 36 hour drive back to Austin: the recording quality was terrible so I turned them way up to hear what people were saying over the road noise...) Anyway, one of the great talks I remember was Jon "Maddog" Hall talking about a minicomputer backup job that went to reel-to-reel tape, and consumed a dozen spools each job (manually switching them when each one filled up so an operator had to stand by and babysit the process), until he fixed it to write data out in 4k blocks instead of a byte at a time. (Because the tape machine would spin up to speed skipping/erasing tape as it did, write a start-of-record, write the byte, write the end-of-record, and spin back down again skipping more tape.) And how just getting the blocking size right made the performance SO MUCH BETTER that it all fit on one reel and an operator didn't have to babysit it, just change it once a day.

Somebody publicly hits that same general problem every couple months, and I want to refer them to that talk but of course the recordings were never posted online. But right now, I'm hitting that same problem in toybox development, and it sucks. I need to work in bigger chunks, and I haven't got them, so I'm swap thrashing. (And my limiting factor isn't exactly time, it's energy. I need to rest, commute, cook dinner, deal with email and such, and do toybox all in the same downtime.)

I was drinking 3 energy drinks a day to keep up with this, but stopped because of health reasons. I'm told there are adult ways of coping like "scheduling" but I've never been particularly good at deadlines that weren't externally imposed. Hmmm...

September 14, 2018

Thalheim lets me ssh into a fast server, which just got reinstalled. He set up a VM with alpine-virt-v3.8.1 in it, and I need to repeat the setup to turn alpine into a development environment: run "screen", then "apk update", then "apk add bash screen git make gcc g++", then "git clone"...

September 11, 2018

Fighting in Afghanistan "because 9/11" in 2018 is like still fighting Japan because pearl harbor in 1958. It's been 17 years. The US participation in vietnam lasted 19 years and 180 days. We're less than 3 years from passing that.

Conservatives keep fighting to recapture a glorious past, I.E. roll back the clock to what they imagine the past to be, the days when nobody questioned a white man literally owning women (through marriage) and black people (as slaves), and when literal ownership broke down still dominating the hell out of both groups by leveraging residual historical advantage in laws and finances and property ownership and political representation and education and social expectations... I didn't expect "the heady days when we were endlessly fighting in vietnam, and korea, and wars we had to _number_ in europe" to elicit the same level of nostalgia, but here we are recreating them when younger boomers once knew better. Of course back when they _did_ know better the boomers said to stop trusting them once they turned 30...

September 9, 2018

Poking at "watch" which means I need scan_key_getsize() to probe terminal size which means I need sigatexit() but the sigatexit() plumbing is using signal() rather than xsignal() (which is a wrapper for sigaction) and signal applies all signal handlers with SA_RESTART which is a bad thing if you're trying to unblock a read(), but I think poll() doesn't SA_RESTART? I read manual pages about this a couple weeks back...

Lotsa fiddly stuff, and "should I replace signal() with xsignal() in sigatexit()" is the kind of question that requires not just research but auditing all the existing users to answer; it eats a couple of good hours of focus. So it goes on the todo list. I have so many things like this on the todo list. (People keep asking me to post my todo list(s) publicly but collation issues aside, I just expanded the 4 word note "signal->xsignal in sigexit" to a paragraph above and I'm _still_ not sure what to do about it sitting at a keyboard ready to edit the code.)

Watch also needs xpopen_both() and an output parser, so I need to poll() and then call scan_key_getsize() which will also poll, but I think if there's no read() between the two the second poll will return immediately because data is still ready to be read, so it's a NOP? Seems like there should be a library function for this but I dunno what it should look like yet...
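That intuition checks out: poll() readiness is level-triggered, so with no intervening read() a second poll() on the same fd returns immediately with the data still pending. A quick demonstration with a pipe (hypothetical test function, not watch.c code):

```c
#include <poll.h>
#include <unistd.h>

// poll the same fd twice without reading: both calls report POLLIN,
// and the second returns immediately rather than blocking
static int poll_twice_ready(void)
{
    int fds[2], hits = 0;
    struct pollfd p;

    if (pipe(fds)) return -1;
    write(fds[1], "x", 1);
    p.fd = fds[0];
    p.events = POLLIN;
    if (poll(&p, 1, 0) == 1 && (p.revents & POLLIN)) hits++;
    if (poll(&p, 1, 0) == 1 && (p.revents & POLLIN)) hits++;
    close(fds[0]);
    close(fds[1]);

    return hits;
}
```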

But it's Sunday, and tomorrow morning I go back to have Johnson Controls drain all my energy. This weekend I just about caught up on... email.

Having a double life was easier when I was younger. Well, depended on the job I was doing, I suppose. I couldn't do it when I worked 90 hours a week at IBM (being paid for 40)... And I remember from Pace and Polycom that "sitting in a cubicle" is far more draining than any actual work I do. (Always was. I hate cubicles. You wouldn't think it's possible to combine isolation with lack of privacy, _and_yet_...)

September 6, 2018

Linus had some excellent posts to lkml about compatibility last month. I collected links here, and I agree with what he says.

I did _not_ bring up Greg KH's disingenuous moves about sysfs long ago. I don't really post on linux-kernel much anymore, and try to stay out of the politics entirely. But once upon a time, I was... more idealistic? Less disillusioned?

Way back when, sysfs was an undocumented mess that kept changing and breaking my code that tried to parse it, so I tracked down its maintainers at Ottawa Linux Symposium (Greg KH and Kay Sievers, who were nice enough in person), sat with them for an hour asking questions and taking notes, then wrote some documentation.

Greg reacted as if I'd done a "don't ask questions post errors" thing and rushed out his _own_ documentation to head mine off, but his didn't document the parts I was trying to use (in busybox mdev). So I engaged to try to understand the technical difference... and quickly figured out that Greg's actual motive was being territorial, and trying to discourage me from touching his stuff. Greg called my code/docs "total nonsense" and was basically repeatedly insulting of the _concept_ of a stable API. His response to my "what is the stable API" drumbeat was to write a document that boiled down to "anything we don't explicitly say is stable in my sysfs, which is mine and nobody else's, could change at any time; only use the parts we explicitly tell you here are ok", and then it was a list of "don't do this" with no "yes, you're allowed to do this" specifics. I.E. it was a smokescreen to disguise the fact he didn't want there to _be_ a stable API.

In this thread Greg was repeatedly condescending at me, but always with a smileyface after it, and asked me why I'd _need_ to document the kernel api because the non-busybox userspace package he'd written (which he insisted people needed to update every time they upgraded their kernel) "already solves this problem for you, it's not like people are going off and reinventing udev for their own enjoyment"... (Later in the same message, "To do otherwise would be foolish :)" always with the smileyfaces) but when I was snarky back _once_ (in a message the archive seems to have dropped) his reply was to get his fe-fees hurt and use it as an excuse to stop talking to me at all on the topic (Did I forget to add a smiley?), and even when I gave him his demanded apology he never replied on the topic again.

The topic being "why is there no stable sysfs API Greg KH and Kay Sievers don't break every third kernel release?", and he was glad not to have to defend his desire to treat sysfs as a private API to only be consumed by his magic udev code which nobody else should ever touch because it was HIS.

Years later, Linus finally got fed up with Kay, but Greg continued to rise within the ranks and pass Alan Cox and Andrew Morton to become the #2 guy who took over when Linus took a month off recently. I boggled a bit that Greg was the guy who took over during Linus's month of self-inspection and added a proper "Code of Conduct" to Linux even though Greg had been the guy who authored and promoted the "Code of Conflict" it replaced. But as always the important thing wasn't what he was doing, but that he was the one doing it. He's quite the politician, whose core principle seems to be "power and control over the Linux political process". (Which is an upgrade from "power and control over the subsystems he maintains", I guess? The important thing is that Greg is the one telling kernel developers how to act, not what specifically he's telling them to do.)

I refer to Greg as my "nemesis" for a reason. But mostly I stay out of kernel development these days, because I don't enjoy the politics.

September 5, 2018

I dunno what to do about kconfig in toybox.

My copied version is from like a decade ago now, and one of my pre-1.0 polishing todo steps is rewriting that plumbing to not use the kernel stuff (which is GPL, although it doesn't ship so the binary isn't).

But upstream has gradually complicated kconfig and recently it became TURING COMPLETE, which is just sad. I'm using the format, but I'm not using the _current_ format, and don't want to go where they've gone. But over time, people see kconfig and want to add "default $(shell, rm -rf /home)" and I ain't supporting that.

So... do I explicitly define/document a simplified subset? Do I rewrite all the command header blocks in a new format? Hmmm...
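If it's an explicitly documented subset, it'd presumably be the bool/default/help core everything already uses (illustrative fragment, not a settled format):

```
config CAT
	bool "cat"
	default y
	help
	  usage: cat [FILE...]

	  Copy files to stdout.
```

No expressions, no $(shell ...), no macro layer: just symbols, types, defaults, simple dependencies, and help text.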

September 4, 2018

I got pointed at a discussion of mkroot.

Thalheim says I should strip the binaries. The other toolchain mentioned is built with -s in $LDFLAGS which presumably also strips the libraries, which I don't think I want to do? (Well, maybe in the native compilers?) But that can be done after the build anyway.

Still need to get x32-native working, but the call of Skyrim is strong and I spent all evening playing that. (Pointy, my high elf, has eaten her second dragon and has been sent to loot a tomb by the greybeards. I assume this is how Lara Croft got her start too. And possibly Indiana Jones.)

September 3, 2018

The mini-target within walking distance of Fade had a switch. I took her copy of skyrim and played it on the whole (6 hour) bus ride back. (I doubt the battery lasts that long but greyhound has outlets, and sometimes they even work.)

This could have a negative impact on my open source productivity.

September 2, 2018

Fade has a Nintendo Switch, which is the successor to the Wii. The nieces have a birthday next month (well, a week apart), so I bought them a switch to share and skyrim (and a mario fights rabbits game, and a scribblenauts successor).

The switch is portable skyrim. Years ago I lost _months_ to skyrim, and this has all the upgrades since then. Fade bought skyrim for her switch (she's been playing octopath traveler), and I spent pretty much the rest of the day playing skyrim.

September 1, 2018

Visiting Fade, spent the day at the Minnesota State Fair. Ate many things on sticks. (Cheese on a stick turned out to be a corn dog with cheese instead of the hot dog. Ok then. The chai malted and cream puffs were not on a stick. The cotton candy was sort of expected to be.) Saw many people being judgemental towards many animals. Learned why the Sheep Burqua exists.

Many displays in praise of non-animal facets of agriculture as well, which were inside and out of the sun. There was an agricultural view of renewable energy (they're all for it), including people standing next to electric cars who could not provide me any updates on app-summonable self-driving car services, people standing next to solar panels who didn't know current prices or manufacturing information or municipal synchrophasor deployment status, and people enthusiastic about wind turbines. (Yay wind turbines.)

(I am totally spoiled by Linux conferences where the people who know more about this thing being there and ready to explain it to me is sort of the point.)

The lines were longer than the last time I was at Disneyland. (Which was admittedly a while ago.) Including the line to the bus to get home. (According to twitter, the guy who did the Gallery of Regrettable Food was in the bus line at the same time we were. I'd say it's a small world, but that's Disney again.) We decided to take the ski lift back to the bus parking lot, which probably added an hour to our stay and reminded me that I am capable of being acrophobic with enough provocation.

August 29, 2018

Haroon is back trying to clean up fold.c. Well I'm glad I didn't discourage him?

I should be better at mentoring, but I haven't got the bandwidth to do even the work I want to do off in my corner, polishing my little balls of mud until they shine.

August 28, 2018

I had to power cycle my netbook again. It ran out of memory and the OOM killer didn't trigger and take out enough chrome tabs to unfreeze the system. (It's always chrome tabs. I open hundreds of them because I never look at bookmarks again, and chrome is a memory pig that hasn't got the brains to save a tab to a zipfile and re-load it when I switch back, they all have to stay memory resident for some reason.)

I hit ctrl-alt-f1 repeatedly but never got to a text console for about 3 hours before giving up and power cycling it. The mouse pointer moved twice during this so I know it wasn't panicked, just swap thrashing. (This netbook hasn't got a disk light. Or bluetooth.)

Sigh. The OOM killer was created for a reason. The clowns who keep sabotaging it because it might kill the _wrong_ thing result in full reboots where you have to kill _everything_ including the kernel. When you find the "only way to ensure fairness is to blow up the world" burn the village to save it types, please escort them quietly off a cliff edge somewhere.

August 27, 2018

The bc code contributed to toybox has a lot of drama in its development history, which made me reluctant to touch it until I understood that there appears to be a single troll responsible for said drama, who was kicked out of the project.

The troll is back, on the list, as a sock puppet. Wheee.

August 26, 2018

Grinding through the style change for GLOBALS().

I should be working on the watch cleanup, but this is easier to do in small chunks with little brain.

August 20, 2018

I was listening to a talk Grace Hopper gave (the audio's glitched in 2 places, but it's like 5 minutes total of an hour and a half talk). About half her talk is repeated in another talk she gave, which has a much better recording. (And a little bit of the same material in a CBS interview with her.)

Around 57 minutes into the first talk, she said that she worked at univac and some other guy worked at IBM and they couldn't collaborate directly on Cobol development because it would be antitrust, so they had to go through either a university or government program to collaborate.

This explains so much. That's why MIT was involved in Multics development.

She's also the source of the phrase "It's easier to apologize than get permission". (I've always heard it as "get forgiveness" but she didn't assume.) Her talk about slack at 1:05 may have inspired the Church of the Subgenius.

August 19, 2018

I recently replied to a cleanup attempt on the toybox list in a way that makes me A) guilty, B) wonder why nobody is teaching C these days. (It's a portable assembly language, you either need it or something _like_ it, and the myriad attempts to replace it are like attempts to replace TCP/IP. It does what it set out to do very well. What it set out to do is _inherently_ icky, and people keep rediscovering the "inherently" part in hopes it won't be true from _this_ angle.)

I don't mean to be harsh when replying to newbies, I'd _love_ to teach more people C and bring more developers up to speed on toybox. But my ability to review their stuff draws from the same well as my ability to write new stuff, and there's a fairly high level of skill required before their contributions are a net positive, and I'm starved for time/energy already. When a cleanup attempt leaves the code _less_ clean, I have to give some pushback, and as gentle as I try to be it's still a long string of "no, this is wrong for reasons you'd need a half-hour of background to understand the context for"... I'm trying to figure out how to make that encouraging, but do not have the social skills to pull that off.

I'd love to spend time on education, but when I taught night courses at ACC it paid _less_ than writing ~2500 words/week for a website did, and nobody wants to sponsor toybox work, so I have to work $DAYJOB, and that eats up 8 hours a day 5 days a week and the lion's share of my energy. I'd love _not_ to have a day job, but I'm not independently wealthy, basic income isn't a thing yet, my patreon is heartwarming but pays less per month than I make in an hour, and corporations will never pay for anything they get for free (even if it takes them ten years to get for free what they could pay to get in two).

Still, I engage with contributors. I try to teach. They probably won't be back on this project, but maybe they'll stick around in the larger community and level up and find somewhere to help out. Which means I put even fewer cycles into toybox itself...

Grumble grumble. Wanna do more. Failing to do more. Hmmm...

August 18, 2018

Bought a bike today, at the bike place in the sort of shopping mall thing west of the river (near the Japanese class and that farmer's market stall). It's an ancient Schwinn single speed, the "pedal backwards to brake" kind I like. There's a sign downstairs in my apartment building about cleaning out the "bike room", which implies this building has a bike room. I should ask about that, my broom closet apartment is tiny and on the 4th floor, keeping the bike in it is awkward.

Cleaning up old blog entries: 2016 stopped where it did because shortly after is a big todo item I left myself, to write up a history blob (a proper write-up on Why It's Not "Gnu/Linux"), which is several hours of work. (Thanks past me.) There are actually two semi-incoherent tries at it on two different days that overlap a lot and need to be consolidated. Probably I should catch up on 2017 and try 2016 again over christmas break. (Modulo I dunno how much time I should take off yet, I'm a contractor who doesn't get paid for time I don't work...)

Anyway, cleaning up the blog entries at the start of the year. Some things are unsolvable (I didn't write down the URL for the video I was watching on January 7, and can't find it now), others are just time sinks.

January 6 left me two toybox todo items. The first is converting xparsetime() to return milliseconds like millitime(), which now that I look at it might not be a great idea, the two existing users of xparsetime() fill out "seconds, fractions of second" structures (sleep.c uses nanosleep() and timeout.c uses setitimer()) which is what the existing API is for. I could make an xparsems() wrapper, but it would currently have just the one user (ping.c).

The other todo item is the environ_bytes() stuff, and iterating through the users I see I still have local changes to find.c that were sort of aiming at that, except... what was I planning to do? I moved TT.max_bytes to the start of find's GLOBALS() block so it can be set from command line options, but I didn't add an option. Probably because I dunno what it should be called?

The problem is that "find -exec +" collates together arguments until it's got "enough", and enough is not well defined. In theory it's "as many as the exec will succeed for", but figuring that out is kernel version dependent. When you exec() it copies your new argv[] array and your envp[] (environment variables) array into the new environment space, and fails if it's too big. How big? It used to be 32*4k pages (131072 bytes), then it was effectively unlimited (each string was allowed to be 131072 bytes and you could have 2 billion each of argv[] and envp[] entries), then a few years later they added an utterly arbitrary 10 megabyte cap on the total size again. And the kernel has no way to _query_ this, so sysconf() returns whatever constant's hardwired into libc. I bothered Linus Torvalds himself about this last year, and rather than adding a way to query the kernel he said to be conservative and just use the old 131072.

Except if you inherit an environment variable space full of crap from your parent, you can have more than 131072 bytes of envp[] and you _can_ still exec new stuff no problem (real current limit is multiple megabytes), but your size check will fail.

My solution was to do one string at a time if you fail the check, and then if the exec() fails that's a real failure. The batching is conservative but it should still work when possible.
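To make that concrete, here's a toy Python model of the batching logic (my sketch for illustration, not toybox's actual C code): add up each string's length plus its NUL terminator plus a pointer slot, flush a batch before it would cross the old 131072-byte limit, and let a single oversized argument through on its own so the kernel (whose real limit is bigger) gets final say.

```python
# Toy model of "find -exec +" argument batching. Conservative: stays
# under the historical 32-page ARG_MAX, but a single huge argument
# still gets its own exec() attempt so the kernel can decide.
ARG_MAX = 131072  # old kernel limit: 32 * 4k pages

def exec_bytes(argv, envp):
    # each string costs its length, a NUL byte, and a pointer slot
    return sum(len(s) + 1 + 8 for s in argv + envp)

def batch(files, cmd, envp):
    batches, current = [], []
    for f in files:
        if current and exec_bytes(cmd + current + [f], envp) > ARG_MAX:
            batches.append(cmd + current)  # flush before exceeding limit
            current = []
        current.append(f)  # lone oversized arg becomes its own batch
    if current:
        batches.append(cmd + current)
    return batches
```

The fallback in the entry above corresponds to the lone-argument case: if that single exec() fails, it's a real failure.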

Giving people the ability to override this still seems like a good idea? But the gnu/dammit version of find hasn't got a command line option for this. I doubt environment space filling up ever occurred to them. So I'd have to invent a new command line option, and I hate --longopts without a corresponding short option, but I don't want to conflict with a letter they'll go and use for something else later.

This is why doing toybox work in small chunks is so hard: the code fills up with half-finished things I haven't checked in yet where I stopped because I need to think about a design point, and when I re-encounter them I have to sit down and work out WHY I didn't finish it and what the weird corner case I hit was, and then I'm still faced with whatever speed bump issue I didn't have time/energy to work through in the first place. (They're not unsolvable, they're just a high enough threshold that I need time and focus to find a good answer. And my tree gets covered with them to the point I start working in a fresh checkout and leave the old tree to "get back to", and then never do...)

The January 8 blog entry has another todo item I forgot: oneit shouldn't reboot the system if it's not pid 1. (Because chroot.)

August 17, 2018

Posted a DMAEngine status report to the kernel list, documenting where I left off for posterity. (I'd happily try to push it upstream except I haven't got a spare 5 years to uncircle the kernel clique's wagons again.) Alas I only got the controller side working before time ran out and we moved on to other stuff...

Meanwhile, toybox! Where did I leave off... "git diff" shows I was partway through genericizing microcom plumbing, which has its own variants of set_terminal(), and serial speed setting code also present in stty and getty. Except this is doing both at once, and set_terminal() doesn't take serial speed. What I could do is add it to set_terminal and have xset_terminal() supply a 0 parameter. The question is who uses set_terminal() directly (and would need to be converted)? Let's see, lib/password.c is (which looks like a bug). And top is, which... I'm not sure about? In ubuntu 14.04, "cat | top" says tty get failed and then hangs with no further output but not exiting either. (Bravo.) Busybox top prints one thing of output (at the wrong screen size, seems to assume 80x25). And current toybox top... gets it right. So I should preserve that. Except that if you ctrl-c to exit it doesn't re-enable the cursor display.

The tcsetattr() plumbing is saving the old settings for stdin and the serial device into TT, which prevents the plumbing from being fully genericized. (The save states would have to go into "toys" instead, lib hasn't got access to any specific command's TT.) In theory I should be able to get it to use tty_sigreset() anyway, because A) do we really care about restoring the serial context to what it was before, B) do we really need stdin to be what it was before, or is resetting it to a sane "cooked" state sufficient?

But tty_sigreset() outputs ansi escape sequences (to reset the cursor visibility and such), and that's not right for microcom, where you may legitimately want to "microcom blah > filename" and have the exact output preserved.
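The underlying save/restore dance is just tcgetattr()/tcsetattr(); here's a Python sketch using a pty (the termios module mirrors the C API), showing the "restore exact prior state" option that stashing the settings in TT buys you:

```python
import pty, termios, tty

# Open a pty pair so we have a real terminal to play with.
master_fd, slave_fd = pty.openpty()

saved = termios.tcgetattr(slave_fd)  # the copy microcom-style code stashes
tty.setraw(slave_fd)                 # raw mode for the serial conversation
assert termios.tcgetattr(slave_fd) != saved

# Option A: restore the exact prior state from the saved copy.
# (Option B would be dropping the copy and resetting to a canned
# "cooked" state, which is what makes genericizing the lib tempting.)
termios.tcsetattr(slave_fd, termios.TCSANOW, saved)
assert termios.tcgetattr(slave_fd) == saved
```

Note that neither option involves writing anything to the terminal, which is the part that matters for "microcom blah > filename".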

August 16, 2018

I was asked to modify the LED driver to create nodes writeable by a specific group. This means the nodes A) need to have a hardwired Group ID, B) need to be mode 775 instead of 755.

This is the same LED driver I fixed platform data for last month, so I have some familiarity with the code, but this driver isn't currently setting either of these parameters, which means they're being inherited from generic code somewhere.

In leds-pca955x.c the function pca955x_probe() gets an argument struct i2c_client *client from ambient plumbing (fully formed from the forehead of Zeus), and eventually calls devm_led_classdev_register(), the first argument to which is client->dev. That winds up in of_led_classdev_register() (in led-class.c), which calls device_create_with_groups()... Hmm, what does it mean when type is set and class is NULL...

I googled and a random stackoverflow page suggested I basically should do sysfs_get_dirent(kobj->sd, name)->mode |= 0770; (Only on 3 lines with more error checking.) I.E. don't try to get it to supply the right mode at creation time, but basically chmod() it after the fact inside the driver.

At which point I might as well just do this in userspace in an init script. This part of the kernel has ridiculous levels of indirection. It's just _silly_.
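If I do go the init-script route, the userspace version really is just a few lines. A hedged sketch in Python (the group name and sysfs path are hypothetical examples; a real init script would do the same thing with chgrp and chmod):

```python
import grp, os

def open_up_led(node, group="leds", mode=0o775):
    """Hypothetical example: chgrp/chmod a sysfs node at boot
    instead of teaching the driver to create it that way."""
    gid = grp.getgrnam(group).gr_gid
    os.chown(node, -1, gid)  # -1 leaves the owner alone, sets group only
    os.chmod(node, mode)     # 775 instead of 755, per the requirement

# e.g. open_up_led("/sys/class/leds/status/brightness")
```

Same end result as the in-driver chmod-after-the-fact, with none of the indirection.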

August 15, 2018

I've had to bookmark the DMA stuff and rotate back to other stuff, now that I've reached a reasonable stopping point. (As in the controller part works now, and next up I have to tackle the driver side of things which I might as well do later since it's a new can of worms.)

One bit of other stuff is that I found out part of the board's instability is CONFIG_PM, which is marked "experimental" for sh4? Huh. Lots of unrelated stuff is going through PM calls, the problematic part appears to be taking/freeing locks? I can also disable the userspace callbacks, and a bunch of unneeded config symbols... (Yay miniconfig.)

August 14, 2018

Back on the JCI n40 sh7760 sh4 hardware, and the actual physical DMA controller hooked up to DMAEngine is working with dmatest! Woo! (Which means the DMAEngine plumbing is telling the DMA controller to copy from memory to elsewhere in memory, so not hugely exciting, but the hardware is DOING something. Correctly! Driven by newly created platform data telling it how!)

August 10, 2018

I tested qemu from 2.11.0 back to 2.4.0, then 2.3.0 panicked the mainstone kernel before init, so I bisected it to qemu git fc1891c74ae1 (June 15, 2015) which fixes a bug supposedly introduced in 2f0d8631b7 (May 13, 2014), except the old version doesn't work for me either.

I don't think this ever worked in qemu? They just said it did? The driver was tested on real mainstone hardware, and QEMU probably only ever _tested_ the PIO codepath. Hmmm. Be nice if there was a human I could ask...

August 9, 2018

Took a break from DMA yesterday and this morning to work on other stuff (note to self: toybox's dhcpc and zcip binaries should be the same darn thing because making the one fall back to the other is hard otherwise), but I'm back poking at it now.

AHA! That conversion commit (halfway genericizing a pxa driver to use DMAEngine) went into v4.4-rc1, so 4.4 has to work if anything ever did, so rolling back to v4.1 and still getting the same failure means the regression is in QEMU.

Ok, what do I know: ethernet card on arm platform is SMC91X with PL330_DMA, and drivers/net/ethernet/smsc/smc91x.[hc] has #ifdef CONFIG_ARCH_PXA. Meanwhile qemu has smc91c111_init() in hw/arm in the files gumstix.c and mainstone.c. And mainstone uses pxa2xx.c, and there are build instructions for that.

But "make ARCH=arm mainstone_defconfig" does not enable CONFIG_DMADEVICES which CONFIG_PXA_DMA is under. So it _can't_ work? So something is wrong here. How do I reproduce the old presumably known working environment before trying to map this forward to current code (and see where the stuff I'm _trying_ to debug broke it)...

August 8, 2018

The smc91x.c DMA code was only ever halfway translated to DMAEngine, the DMA enabling code is still under #ifdef CONFIG_ARCH_PXA today, and uses PXA constants inside those #ifdefs.

What is a PXA? Back around Y2K Intel bought an ARM fork called StrongARM, demonstrated their utter lack of understanding of anything non-x86 for several years, and then sold the corpse to Marvell. PXA was the ARM chipset they put out in the meantime.

(Aside: back when I wrote for The Motley Fool I got to phone interview an Intel vice president. I remember taking the call in my office at BoxxTech and asking the guy what they were doing with Arm, and him being surprised at my interest. The answer was basically it wasn't x86 so Intel had an allergic reaction to it because "we do x86" was still Intel's corporate identity. See the "you can't get from stage 2 to stage 3 until you've given up the concept of having a core product/business" part of the three waves stuff, they were _hugely_ stuck on that the way microsoft was stuck on being "the people who do windows" back when they were trying and failing to diversify into msnbc and xbox and so on.)

Anyway, the smc91x driver conversion says it was tested under "mainstone", and that's one of the arm boards qemu emulates. The QEMU mainstone board claims to emulate both the DMA and the ethernet card in question, so let's try to set it up.

August 7, 2018

This sh7760 DMA thing is hard.

So, back before my week in Austin I was trying to get DMAEngine to work on the sh7760.

The smc91x.c ethernet driver has DMA support, but it was initially only for an old ARM chipset (PL330_DMA in PXA chipsets). It was sort of halfway converted to DMAEngine, but the code's still full of #ifdefs for a specific arm board. So I'm reading the dmaengine documentation (and watching youtube presentations about how it's supposed to work). There's two halves of this plumbing: hooking up a DMA controller so you can ask it to do things in a generic manner (a DMAEngine controller driver), and then teaching a device how to use DMAEngine.

Working on the first half at the moment. In theory there's plumbing for this already. It's not currently hooked up on this board. I hope it hasn't bit-rotted too badly...

August 6, 2018

Back at work in Milwaukee. My eyes are really easily tired, still, ever since they got dilated. (And I suppose from a week of Austin pollen. And my glasses are still chipped and scratched and bent and so on. Fade ordered new ones but it's Zenni optical so they come from china in a container ship. Takes a while.)

August 3, 2018

Eye doctor appointment today, got my eyes dilated and my retinas photographed. She didn't see macular degeneration, but the blind spot in my left eye _is_ three times the size of my right eye (something I noticed a couple years ago), because something something extreme myopia something something prescription something something optic nerve. She wants to see me again in 6 months.

Dilation made catching up on the toybox patches I didn't see due to spam filtering a bit more difficult.

August 2, 2018

The reason I didn't see Elliott's message on the mailing list is because gmail's decided all emails from him go into the spam filter. I'm calling that an own goal on Google's part.

And it's not one message I missed, it's several messages going back over a week. Lovely.

July 30, 2018

Blah, end of the month is the Ohio LinuxFest call for papers deadline. I could submit something. It's not far from Milwaukee.

I suspect I'm not going to, though.

July 29, 2018

I've been doing a cleanup pass on the deflate code from toys/pending/compress.c with the intent of moving it to lib/deflate.c. This is hard because it uses TT and toybuf and command flags and such, which were fine in a command but not in lib.

For things like "tar" I could just have everything shell out to gzip/gunzip and use it as a pipe filter, but not for zip/unzip, and if I get around to doing a read-only git clone that needs zlib compression (which is deflate with a different header and crc algorithm), plus rsync -z...

So I'm converting it to lib/deflate.c, but my only current user doesn't provide a good model for what that API should look like. I should probably tackle "zip" next.

July 28, 2018

Back in Austin! Yesterday was finishing stuff at work (well, failing to) and travel, bit of a blur. (Fade's back from her 6 weeks in Rome, so we're back home with Fuzzy and the cats. And the dog, which Fuzzy's been taking care of but Fade is reacquiring custody of.)

Toybox! Been basically ignoring it for a week because tired. What's accumulated...

July 26, 2018

Blah. Went off caffeine last friday and I've done nothing but work since. I get home and go to sleep and wake up to get ready for work. Probably good for me in the long run...

On the way home yesterday I went to the doctor about weirdly persistent short bursts of pain on my left side (only lasting a second or so, but 3-4 times a day and always in the same place for the past week). He ordered a urine test, and it says "kidney stone". So I have that to look forward to. (I wouldn't think that could seem like it was at the bottom edge of one's rib cage, but I guess kidneys are weird.) And now I need to find a "primary beverage" _other_ than the cheap "60 cents a quart" gatorade knockoffs, since that's probably what's doing it. (All those mineral salts have to go somewhere...)

I've spent this week trying to add DMA support to the smc91x driver on an sh7760 board. In theory smc91x supports DMAEngine and sh7760 has a DMA controller. In _practice_ smc91x.c and smc91x.h have #ifdef ARM around all their DMA support for historical reasons, and even when I enable the sh4 on-chip dma controller DMA Engine driver (there is one, CONFIG_SH_DMAE), it only ever calls the _init function and never calls _probe, and tracing through it looks like it wants platform data describing the DMA controller. (Or a board converted to device tree.)

I don't know what such platform data should look like, but I found arch/mips/jz4740/platform.c setting up a DMA controller via platform data, so maybe I can crib from that?

For extra fun, the smc91x.c fallback code when DMA _doesn't_ happen appears to be performing the PIO in interrupt context, not even handing it off to a tasklet. When it can't set up the DMA, it does a for loop right then and there in the "else" case. No wonder the board's unhappy. (That's generic code for you: we're doing DMA, everything should support DMA, the PIO case is just a fallback nobody should be using, don't bother optimizing it...)

July 23, 2018

The big project I've been working on at $DAYJOB, since the start of the year, boils down to porting an sh7760 embedded system with 128 megs of ram and 128 megs of NOR flash from Windows CE to Linux.

The new system is buildroot (sadly with glibc) running a giant application binary (a .NET thing which other people got running on mono). I started out with general plumbing (things like writing a boot executable busybox init can call during startup to read bootloader variables and configure the network and such based on what they say), but the box needs to handle some realtime tasks (mostly to do with responding to serial data within some number of milliseconds), and a lot of what I've done has been helping squeeze latency issues out of the base OS to help the app hit its realtime performance goals.

In theory only certain threads need to be realtime, and those are running native code (from shared libraries). Getting them to respond reliably by deadline has involved identifying and removing a lot of different sources of latency.

Initially they thought "run the linux-rt patch and adjust the priorities", but the realtime patches actually made things worse and raising priorities starved kernel threads.

There was of course some driver work: mostly somebody else's job although I dug through it to help them figure out what needed changing; the serial FIFO is enabled now and the tty layer remains creepy (notifying the tty char device that new data is available so poll() returns requires taking a lock, so can't be done from the serial interrupt context, it has to schedule a tasklet, and that tasklet is lower priority than "realtime" threads.)

But a lot of what's needed adjusting is userspace. I mentioned the switch from fork() to vfork(). We also had to mlock() a lot of stuff (we don't have enough memory for mlockall(), the OOM killer immediately triggers, so we're parsing /proc/self/maps and replacing malloc() with global variables so we don't have to lock the heap).

Now we've reached the point where the ethernet card's lack of DMA support is a big bottleneck. I remember my 75 mhz 486dx reading 300k/second from its ATA hard drive when in PIO mode, and doing 10 megabytes/second once I got DMA enabled. Here a 200 mhz CPU with a 67mhz external bus is attempting to copy 11 megabytes/second from a 16 bit ethernet port via PIO. Needless to say the CPU is pegged at 100% without getting full speed even when we're not adding ssl or gzip to the protocol.

As an aside, I once benchmarked 10baseT at 1.1 megabytes/sec, 100baseT at 11 megabytes/second, and last I checked gigabit ethernet was maybe 80 megs/sec on a good day with a tailwind, although they've probably fixed it closer to its theoretical ~110 m/s now. (I don't own a hard drive that fast so it doesn't come up much, gigabit is "plenty".) There's been 10gigE hardware out in the wild for a decade now, and mostly nobody's cared.

With mp4 compression DVD-quality video is somewhere around 150k/second and even an HDTV stream pretty much fits in 10baseT. What people really use the faster speeds for is copying large files and mounting network filesystems.

The jump from USB 1.1's theoretical max of 1.5m/sec to USB 2's 60m/sec to USB3's 640m/s is significant because we're plugging in hard drives. Filling up a 64 gig USB stick at 60m/sec would take around 20 minutes, and filling a terabyte disk through USB 2 is well over 4 hours. Network resources we usually just stream the data we're interested in, and 100baseT is plenty fast to access a user's stuff or even a small number of people sharing. Gigabit comes in when you connect a floor of a building. 10GigE comes in when you connect entire buildings to each other, which isn't something end users do much.
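Checking my own arithmetic there (assuming the sustained theoretical rates above, which real flash and real disks don't actually deliver):

```python
# Time to fill a device at a given sustained rate.
def fill_minutes(capacity_bytes, rate_bytes_per_sec):
    return capacity_bytes / rate_bytes_per_sec / 60

usb2 = 60 * 10**6  # USB 2's ~60 megabytes/second theoretical payload rate

print(round(fill_minutes(64 * 10**9, usb2)))      # 64 gig stick: ~18 minutes
print(round(fill_minutes(10**12, usb2) / 60, 1))  # terabyte disk: ~4.6 hours
```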

Anyway, this box is trying to keep up with 100baseT and failing, so it needs DMA.

July 21, 2018

Caffeine deprivation. Long naps today.

I need to redo route, I need to do a proper toysh. These are the blockers for removing busybox from mkroot, which is the blocker for integrating mkroot into the toybox repo.

Thalheim got me a login to a fast machine running alpine linux, but attempting to build musl-cross-make failed because the patches it's applying require "fuzz", which busybox patch doesn't have because they're using an old fork of toybox patch, and that doesn't have fuzz support.

So I need to add fuzz to toybox patch. It's on the todo list...
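For anyone who hasn't hit it: "fuzz" means that when a hunk's context lines don't exactly match the file, patch retries while ignoring up to N lines of context at the hunk's edges. A toy Python illustration of just that matching rule (nothing like how any real patch implements it):

```python
def find_hunk(file_lines, context, fuzz=2):
    """Return (offset of full hunk, fuzz used), trying an exact match
    first, then shrinking the context's edges by 1..fuzz lines."""
    for f in range(fuzz + 1):
        want = context[f:len(context) - f or None]  # trim f lines each edge
        if not want:
            break
        for i in range(len(file_lines) - len(want) + 1):
            if file_lines[i:i + len(want)] == want:
                return i - f, f  # where the untrimmed hunk would start
    return None

file = ["/* drifted */", "int x;", "int y;"]
hunk_context = ["/* old */", "int x;", "int y;"]
print(find_hunk(file, hunk_context))  # exact fails, fuzz 1 matches: (0, 1)
```

That's what musl-cross-make's patches need and toybox patch currently refuses to do.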

But what have I been doing today? I want a short output mode in ifconfig that shows me:

  lo      00:00:00:00:00:00 ::127
  eth0    12:34:56:78:9A:BC

And when poking at toybox ifconfig.c to add that, I did cleanups to the existing code. (Yes, the code I use as the example of how many cleanup passes I can do on a command! It's never done, it's only shipped.)

July 20, 2018

Sigh. My eyes have been weird forever. Back in high school, I had some sort of funky double vision where the large fuzzy circles that bright lights turned into with my glasses off had a dome on the top, and when looking at things like an LED clock there'd be a ghost image above it. That turned out to be the glasses I was wearing or some such? In any case it went away again eventually. Most things seem to, but I'm getting older...

My right eye's been a little different from my left for years. They were identical growing up, but I noticed the colors were very slightly more green in my right eye and red in my left eye while working at WebOffice (good grief, 17 years ago now). It's barely perceptible, but it's there. Hasn't gotten any worse since I noticed it, though.

Now I've noticed that vertical lines in my right eye are bowed very slightly to the right. It's again barely noticeable. I googled and they said "macular degeneration". (Never google a medical symptom.) I'm _hoping_ it's another "your glasses are terrible and the effect persists a while after you take them off" thing...

My right eye's been the one that's had stabbing pains behind it, and feels like something's been pushing against it, for 10 years now. Which I got multiple cat scans and MRIs about (and none of the places I got them from could give me a copy of my records years later, they hadn't retained them, what wonderful US medicine)...

I poured my energy drink down the sink anyway. Going cold turkey off them. (I already know they're not good for my vision.)

Don't expect a productive week.

July 18, 2018

I've been redoing the ps help text so "ps -o help" is generated from a new field in the typos[] array (which is the "-o TYPE" data, yes my names tend to be horrible during development but I try to clean them up before checking stuff in, remind me to replace the "DREAD" help text to say "Disk Read" instead of "Pirate Roberts" before committing this).

Copying the data from the old manually formatted text to the new automatically generated columns is problematic. Some fields are a full line, or even multiple lines.

Fun formatting bits where I cheated with the manual layout because the left column had 5 spaces for field names and the right column had 7 spaces, and the left description had 33 spaces and the right has 29... and of course the "command line -o fields" section doesn't line up with the "process attribute -o fields" section at all, it just LOOKS like it does at first glance.

But the real problem is I have fields that it knows how to display, but aren't in the help text. (Many of these are in the default output of top and iotop.) And the thing is, I don't necessarily remember quite what they're displaying at this point either, it's been a while. And sometimes the kernel itself is unclear or has serious version skew. This is a perennial problem.

For example, "RSS" is displaying SLOT_rss and "RES" is displaying SLOT_rss2. Meaning I have SLOT_rss and SLOT_rss2 and I've forgotten the difference between them. I traced through the code to see that rss is read from /proc/$PID/stat (as described in table 1-4 of Documentation/filesystems/proc.txt) and rss2 is read from /proc/$PID/statm which means the kernel is returning "rss" from two different sources, and I retained them because they DON'T MATCH. (Thank you kernel.)

I vaguely recall rss2 happened because of either top or iotop, which needed the data in the statm version, not the stat one. But what was the difference? Might be time to read the kernel source again if I didn't leave myself a note in the source, on the mailing list, or in this blog... (Or twitter, but that's mostly cries of pain/frustration.)

Sigh. if I google "what is the difference between rss in stat and statm" and wind up getting pointed to something I wrote a year or two back I'll be annoyed.
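For the record, the two reads themselves are trivial; it's the semantics that are the problem. A quick Python check of both sources (Linux-only, and the two files are read at different moments, so the numbers can legitimately differ a little):

```python
def rss_pages_stat(pid="self"):
    # /proc/PID/stat: comm can contain spaces, so split after the ')'
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().rsplit(")", 1)[1].split()
    return int(fields[21])  # field 24 overall; fields[0] is "state" (field 3)

def rss_pages_statm(pid="self"):
    # /proc/PID/statm: second whitespace-separated field is resident pages
    with open(f"/proc/{pid}/statm") as f:
        return int(f.read().split()[1])

print(rss_pages_stat(), rss_pages_statm())  # usually close, not guaranteed equal
```

Both are in pages, so multiply by sysconf(_SC_PAGESIZE) for bytes. Why the kernel computes them differently is the part that needs the source dive.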

July 17, 2018

Alright, last day of this. But I'm almost caught up! (Who knew it would be hard to compress over 10 years of my life into a concise narrative?)

When I returned to IBM in the second half of 1998, I sprayed the whole place down with Linux.

What happened was, I wound up back on the 3rd floor of building 903, 4 doors down from my old office... and bored out of my skull. I was there to provide support for big OS/2 customers like Ford's assembly lines, in _case_ they had a problem. There was no new development planned on OS/2, and the existing OS/2 department was largely trying to find other jobs in the company like I'd done with JavaOS a year earlier. (You may notice a theme here of me being 6 months to a year ahead of my coworkers. They were just then seriously looking to leave.)

I'd left IBM in late 1997 because I was all excited about Java and couldn't get IBM to let me do java, and during my absence IBM had taken on Java as a religion and wanted to do Java everywhere... but I was rapidly cooling on Java because Java 1.2 came out in December 1998 and it redid the GUI based on a horrible model/view/controller monstrosity called "swing" that was a _bad_idea_. Java 1.1 had the lightweight AWT that was elegant, there were some holes (like the lack of truncate() I'd pointed out) but 1.2 turned the GUI into a bloated nightmare and introduced "deprecation", meaning they didn't plan to be serious about backwards compatibility like C. For me Java peaked somewhere between 1.1 and 1.2, I did _not_ like the direction the language was going in, and Sun's refusal to provide a JDK for Linux had revealed that they didn't want to destroy microsoft's monopoly, they wanted to capture it intact. Nobody wanted to trade microsoft's leash for Sun's noose.

Sun was very clearly threatened by Linux, but the Linux community had written Sun off as too dumb to live after this post from a sun engineer, and the too dumb to live part was in reply to a long technical explanation of how Linux was better than Solaris which also explained why Sun had invented threading. Threading happened because Solaris' process switching was utterly terrible. Linux's process switching was faster than Sun's _thread_ switching, and Java's refusal to provide things like poll() or select() and instead make you spawn threads just so they could block waiting for I/O was _silly_. (Linux would shortly have its own performance issue trying to replicate sun's model, but they also rapidly _fixed_ it. "It never occurred to us to optimize for such a stupid use case. We've now done so. Next.") By the time Sun screwed over Blackdown, The Linux community as a whole wrote Java off as a bad idea that might need legacy support the same way Cobol did, but nobody sane should write anything _new_ in it.

Linux famously grew 212% in 1998, that was all the Java developers switching over. Netscape had collected the "anything but microsoft" crowd under a single flag, and then poured them into Linux by elevating Linux to a Tier 1 platform and pointing to Linux as the model that convinced them to release the netscape source. You still needed an OS to run Java under, and the Java userbase switched en masse to Linux and patiently waited for proper Java support from Sun, learning native Linux development in the meantime... and then Sun's bad behavior convinced them to stop waiting and stick with native Linux development.

I was about 6 months ahead of this curve because I'd been keeping tabs on Linux since the SLS disks, and had already been looking to move _off_ of OS/2 with Linux as the obvious next step (because "it's not going away" was a unique value proposition among non-microsoft operating systems, at least before Steve Jobs returned to Apple). My main problem was that Linux had never been able to make XFree86 work with any graphics hardware I could actually obtain, and even installing the OS/2 port of XFree86 to tinker with it hadn't solved the problem for me. Linux was fine, XFree86 was terrible. And after OS/2, I wasn't going back to a text mode only OS.

So I bought some new hardware and made another attempt at putting together a Linux system. I didn't have a CD burner at Quest or at home, so I downloaded the 25 debian floppies and tried to install them, but my serial port was at a nonstandard IRQ and I didn't know I needed to use the "setserial" command to tell Linux (hard to pull up a man page when you don't know the name of the command)...

And here's where I asked that question on the list, which marked me finally getting a usable Linux system installed at home, albeit in a slightly less obvious way than the question implies. After posting, I read all the list traffic, including a message from somebody who had an easier time getting Red Hat installed, a reply to my post from someone who had the same problem but no solution, and then I hit this message replying to someone comparing Debian and Red Hat, where the Debian user said he was "really grateful for not having people like that trying Debian" and that "Debian is not the right distribution for them" and I went "Ah-ha, there's my problem! I shouldn't be using Debian! It's full of assholes like that guy. He says he's glad for everybody like me to go use Red Hat and leave his thing to die, I should take this advice and let network effects do their work!"

So I downloaded the Red Hat install disk, figured out how to get setserial to teach the kernel to use my modem (either that or it probed the sucker correctly, there was a kernel config entry for that way back when that Debian didn't enable because it was "dangerous"; it poked the card to generate an interrupt and assumed the next interrupt was the card's interrupt, in theory a spurious other interrupt could come in but in reality it worked fine). And I was a happy Red Hat user for years after that (happy until the inexcusable introduction of "kgcc", and continued to use it via inertia through Red Hat 9, then Fedora Core 1 dropped support for the processor my machine was still using because they built the kernel "optimized" for a newer chip and it wouldn't boot on my machine. At which point I switched to Knoppix, and from there to Ubuntu. I refused to touch Debian without thick gloves between me and it until quite recently, all because of one asshole at a formative moment...)

This poking at Debian and Red Hat was the context in which I returned to IBM in the second half of 1998, and sat in my office bored... so I tried to install a Linux partition on my workstation so I could poke at Linux at work as well as at home. Except that Red Hat's network install wouldn't work through the socks/proxy firewall, but the department at IBM had a CD burner and a stack of blank CDs in the supply closet and I remembered Debian's download directory had contained ISOs, so I downloaded the Debian ISO image and burned it to a CD and... found out IBM had given all the leftover unsellable Micro-channel bus PS/2 systems to employees for desktop machines, and Debian didn't support micro-channel yet. (Red Hat said it did, but Red Hat had no ISO downloads (to encourage CD sales) and couldn't install through IBM's firewall...)

So I had a useless Debian CD on my desk and mentioned it to a coworker in the usual office chit-chat, and the coworker said he had a PCI bus machine (because the micro-channel ones were old and creaky by that point, and being replaced), so I gave him the Debian CD (legacy AIX knowledge was common in the department so a PC unix was of interest), and then other co-workers wanted debian CDs so I burned more and passed them around the department...

So I got my bored co-workers excited about Linux, but I couldn't get an official IBM position on Linux. I cornered an executive in an elevator and asked him what IBM's plans for Linux were, and he said the lawyers were uncomfortable with the GPL and until they signed off on it, IBM wouldn't touch it. So I let my contract expire at the end of the 6 months, and went to work somewhere else.

The next year all those old OS/2 guys founded the "IBM Linux Technology center" (and eventually got a bigger space under the cafeteria in the building next door), and when Lou Gerstner left he gave his successor Sam Palmisano a todo list starting with "spend a billion dollars per year on Linux for the next 5 years"...

The first time I left IBM, it was because I couldn't do Java. Then they got Java as a religion. The second time I left IBM, it was because I couldn't do Linux. Then they got Linux as a religion. I spent the first half of the 2000s thinking I should have stayed at IBM and waited for them to catch up with me...

A year and change later I attended the disaster that was Kansas City Linuxfest (which was _so_ bad several of us huddled around a table in the debris and talked about how to run a convention right, and I took notes, and a couple years later that became Penguicon). In the aftermath of KC LinuxFest there were boxes and boxes of the June 2000 issue of Linux Journal which they'd meant to give away as schwag at their booth, but there were like 1/10th the attendees as expected, so they were piled on a pallet to ship back to Seattle, and I grabbed a couple boxes and drove them home and left them in the entryway of the IBM cafeteria with "Free: Take One" written on the flap. That was the "who's who in Linux" issue with interviews with the top 50(?) contributors, including lots of important people like Pauline Middelink (inventor of network address translation) who you don't hear from today, because driving brilliant pioneering women away from the boys club via endless harassment is how a lot of white dudes keep themselves employed. (They can't compete, so they harass...)

I've poked at Linux as a hobby ever since. I taught more night courses at ACC around this time ("intro to unix" passing on the Linux stuff I was learning, and "intro to operating systems" which was a survey covering mainframes and such). I really enjoyed teaching, but after a break when I looked into it again the paperwork requirements had grown beyond my interest, and teaching as adjunct faculty paid less than writing for the Motley Fool did. And I wrote about Linux several times for the Motley Fool (another "for fun" thing, although for a year or so it turned into a half-time telecommuting position that let me take time off from programming while my mother was sick). Writing for The Fool tapered off at the end of 2000 when the dot-com bust happened, that's its own story, ask me about that sometime if you're curious...

I'm not sure if "coming back to babysit Feature Install" was my first Bullshit Job, since I did in fact have the skills to do it and was ready to if the need had arisen. But if not the next one on my resume, Trilogy, definitely was. Trilogy was more money than 27 year old me had _ever_ earned ($50/hour! About $75 adjusted for inflation.) doing a crazy java thing which was _also_ IBM politics. Two divisions of IBM had fought over a project until management took it away and outsourced it to stop the fighting. Of course Trilogy had sold them a useless home-grown Java framework as part of the deal that I was nominally there to work on, but I don't think I ever added a line of code to it that mattered? The real job was coordinating work done by competing bits of IBM in different cities, which Trilogy was not in a position to do. (But we were at least considered impartial, and thus disdained by both sides.) So I spent all my time on phone calls getting bits of IBM to talk to other bits of IBM, and eventually figured out that what had screwed up the project was a specific manager in Dallas who had figured out how to advance his career by screwing up projects. He'd take a project still in the planning stage and go "why isn't this being implemented?" (Because the design's not done?) But he'd make noise and implementation would start with an unfinished design. Then he'd go "Why haven't we started testing?" (Because it's still being implemented?) And enough noise later testing of the unfinished thing would start. The result was a nightmare, but "Mike gets things done!" This wasn't happening before he acted, and now it's happening! The fact it _shouldn't_ be happening and was heading for a cliff wouldn't become apparent until he'd been promoted away.

The project outsourced to Trilogy was some sort of billing system for IBM mainframe customers, which some IBMers confessed to me _had_ no fixed prices for anything because the sales process was entirely about figuring out what the customer would pay and charging them every penny of it. My main recreational activity on the job was fishing through the bug database for a request from IBM to do a thing and a request from IBM to undo that thing (or do the opposite), pair them up, and bring them up in the daily conference calls. The project got so bad that the IBM WorldWide Integration and Test facility in Australia signed off that it had "passed" (some random version of the constantly changing spec and some random version of the constantly changing code matched some random version of the constantly changing test profile, or at least they claimed it did) and closed out their budget and said "sorry, we're done, we can't work on this anymore". Of course there were pending design changes (of a fundamental "it should work _this_ way" type) still being entered into the defect tracker, but it was Brownian motion rather than progress. The test department washed their hands of it so no further progress could be made, and that was just one political crisis du jour. I was paid to be on conference calls with France and Böblingen Germany, WWIT in Australia, and Poughkeepsie New York. Note the three wildly different time zones? 6am call, 8pm call, or 3am call (Austin time) depending on who had to be on the phone. I would sometimes have four of them in a row (all three for the day and another in the morning), and was encouraged by my boss to sleep under my desk. It was one of those dot-com companies with a free food room, and not cheap stuff either: roast beef and powerbars and so on.

At the end of the 6 months I turned down a 50% raise ($75/hour! In 1999 money! For a 27 year old!) to continue, because I was developing stress-related health symptoms. (And my mother was dying of cancer, but that's another story...)

Trilogy taught me to pursue fun work rather than the most lucrative thing I could currently be doing. At my next job (Boxxech) I tried out a management role (and hated it), then took a huge pay cut to join a startup I found at the local Linux User's Group meeting, because I wanted to do Linux full-time. That was WebOffice, which I've already written about, and which led to Aboriginal Linux and mkroot and busybox and toybox. There, all caught up.

July 16, 2018

Day 3, still working towards my switch to Linux, we're up to about the start of 1997.

OS/2 4's two selling points when it shipped were Java and voice recognition (which didn't have an easter egg for "Tea, Earl Grey, Hot", and I'm still sad about that). OS/2 was crammed to the gills with Java, which was uniting the "anything but microsoft" crowd. (Back when I still got along with Eric Raymond I put this part of the history in The Art of Unix Programming, to which I added so much material he almost made me a co-author. It's a pity he ossified into a loon. What is it with old unix guys? When I first met Eric, he said back in the 1980's Richard Stallman was sane and he and Eric used to be friends, but Richard got progressively crazier as the years went on. Eric was worried he'd go crazy the same way as he got older, but instead he found a whole new axis of crazy to go down. I am sad. Anyway...)

I was intrigued by Java because my old "platform independent binary executable code" (PIBEC is not a useful acronym) project was my proposed graduate project if I'd gone to grad school instead of trying to pay down my student loans immediately. (It's also what got me into compiler development: I first read large chunks of the GCC source code trying to figure out how to add a new "backend" generating my bytecode. I'd also need to have two kinds of function pointers, one for native code and one for bytecode, but I never got that far. I also tracked down a copy of Small-C and read that when gcc circa 1993 proved intractable; my later participation in tinycc was probably inevitable.)

But I hadn't been able to go to Hursley (where IBM's JDK team worked) due to the Austin site consolidation, and Sun didn't get back to me in time before I'd committed to move to Austin, so at the end of 1996 when Warp 4 finally shipped (a year too late to matter) I finally did transfer to IBM's JavaOS team... as a tester. (There were no developer seats open, but I wanted in and I can test. And it meant I was doing Java. Now instead of working on the 3rd floor of building 903, I was on the 6th floor. Oooh.)

I thought JavaOS meant "port the JRE to DOS using expanded memory and green threads with an SVGA AWT". That was the obvious approach to me: boot from a floppy, run well in 1 meg of ram, it wouldn't do SMP but you can do upgraded versions later, NCSA telnet was providing an IPv4 stack for DOS back in 1986. This would handle all those kiosk use cases they were talking about with ease and could be scaled up in later releases.

But IBM was porting Sun's JavaOS to PowerPC, and what Sun did was take the Solaris kernel (there is no recovering from this point in the sentence, we're already doomed) and run the JDK as the init task on PID 1. Solaris had no device drivers for non-Sun hardware, so they allowed userspace device drivers to be written in Java, meaning the OS was calling out to Java to interact with the hardware. Of course they didn't have device drivers ready in java either, for esoteric things like IDE hard drives, so they had to netboot via dhcp/bootp (which Intel later renamed PXE boot, but this was 1997 so Intel hadn't taken credit for it yet) to bring the system up, and ran from a 32 megabyte ramdisk, with another 32 megs for Solaris, or a total of 64 megabytes of memory _IN_1997_. This was an INSANE amount of memory at the time. OS/2 could run in 4, run comfortably in 8, and 16 megabytes was posh, and OS/2's biggest criticism was that it was a memory hog. It quickly became clear to me that this was another powerpc tangent due to IBM's hardware side dragging the software side away from anything any customer would ever want.

This is also the system where the animated screensaver paused noticeably every few seconds to run the garbage collector, which got me thinking about improvements. But at the time what this meant was on top of everything else, the desktop is not reliably interactive.

This time the crippled PowerPC boondoggle IBM was doing was cross compiled from PowerPC AIX machines, so I was learning some new stuff, but I wasn't really getting a lot of Java coding in. (It was during this period I wrote a deflate implementation in Java, in the evenings on my home machine. If work wasn't going to teach me java, I'd do it myself...) The JavaOS work wasn't as stressful as the OS/2 development had been, but it seemed like such a waste of time. And as a tester, I wasn't allowed to fix anything that was wrong with it. And they didn't _need_ testers to know that 64 megs of ram with no hardware drivers and a glitchy desktop was not a recipe for success. I worked on JavaOS for 6 months, and then I too looked outside IBM.

I answered a classified ad for a Java developer, nailed the phone interview, and got a job at Quest Multimedia writing 45 "interactive diagrams" for the CD in the back of a McGraw-Hill math textbook. I'd been at IBM long enough I didn't have to pay back my relocation money, so I took the plunge.

Working at my first start-up was fun, but still fairly exhausting. Jere Confrey (who is apparently at NC State University now) and her husband Alan Maloney were professors at UT who were good at navigating grant money bureaucracy, but were sick of 2/3 of it immediately vanishing when they ran it through their university (charging them exorbitant rent for office space and $50 per folding chair and so on; the university viewed such grant money as supporting the university more than any specific professor or project). So they set up their own little company to run the grant money through, and got their office space in a strip mall off of 183 and Anderson Mill (northwest of town).

They had a textbook contract that involved writing precalculus tutorial exercises as Java applets. Jere would come up with an idea, Alan would do a macintosh "hypercard stack" with a sort of storyboard, print it out on paper (usually 5-10 pages, mostly fake screenshots), and I'd implement it. They had a graphic artist to feed me backgrounds and gifs to animate (and to make the web page each applet would embed into), and the four of us were the entire company.

I worked there for 6 months, and am still proud of a lot of the work I did. I made an auto-resizing graph (figured out where the graph lines should be in the X and Y range you told it to display, trying 1, 2, 3, and 5 until it got between 7 and 10 lines) which you could feed a string equation to and it would plot it on the graph, and if the equation was an inequality it would fill the appropriate side of the graph (with a dotted or solid line depending on whether it was > or >=), and for one of them I even colored how many layers of overlapping inequality you had. And of course moving a point along the curve... Under the covers I wrote a function that would parse a string and perform the equation it said (with parentheses and correct priorities and everything, pushing and popping operator and operand stacks), and the way it plotted curves is I'd stick an X in there as a variable and then string.replaceAll() to substitute in the number, and repeat that for each point I wanted to plot, and it was fast enough on a 486dx75 to do at least 4 or 5 frames per second even on the complicated ones. (The trick is to never allocate objects after startup, which is expensive because it memsets their contents to zero, and also means the garbage collector never runs.)
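
The two-stack parser described above is simple enough to sketch. This is a hypothetical reconstruction in Python rather than the original Java (which is long gone): operands and operators each get a stack, and a waiting operator of equal or higher precedence gets applied before a new one is pushed. The original substituted the number into the string with replaceAll() before parsing; here x is just handled as a token, and there's no unary minus.

```python
import re

def evaluate(expr, x=0.0):
    """Evaluate an infix expression using explicit operand and operator
    stacks (the classic two-stack algorithm). Handles + - * /,
    parentheses, numbers, and the variable x. No unary minus."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    tokens = re.findall(r'\d+\.?\d*|[()+*/-]|x', expr.replace(' ', ''))
    operands, operators = [], []

    def apply_top():
        # pop one operator and two operands, push the result
        op = operators.pop()
        b, a = operands.pop(), operands.pop()
        operands.append({'+': a + b, '-': a - b,
                         '*': a * b, '/': a / b}[op])

    for tok in tokens:
        if tok == '(':
            operators.append(tok)
        elif tok == ')':
            while operators[-1] != '(':
                apply_top()
            operators.pop()          # discard the '('
        elif tok in prec:
            # apply anything of equal or higher precedence first
            while operators and operators[-1] in prec \
                    and prec[operators[-1]] >= prec[tok]:
                apply_top()
            operators.append(tok)
        elif tok == 'x':
            operands.append(x)
        else:
            operands.append(float(tok))
    while operators:
        apply_top()
    return operands[0]
```

Plotting a curve is then just one evaluate() call per X pixel, which is why keeping allocation (and thus the garbage collector) out of the inner loop mattered so much on mid-90s hardware.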

Another fun thing about working for professors is that when I decided I wanted to teach night courses at the local community college, and write for an online publication (the Motley Fool), they were supportive. (IBM made it clear that full-time employees were serfs and they owned ALL YOUR TIME, so anything you did outside of work belonged to them or was somehow stolen from them. Quest didn't care as long as I got the work done, and with two professors running the place "busy writing and teaching when not doing work for you" made them feel right at home, like I was a grad student.) I started by teaching an "Intro to Java" course at the ACC campus closest to work, because nobody knew much about it then and my year and change of poking at it, plus the day job doing it, made me an expert.

The desktop system Quest bought me took a while to set up. I wanted to run Linux on it, but "there's no JDK for Linux" was the #1 bug on Sun's "Java Developer's Connection". The main page of the site, where all the java documentation lived, showed you the top 5 bugs, and if you logged in you could vote for which bug they should fix next. This bug had more votes than the next 4 bugs combined, and it stayed that way for 11 months... until they changed the page to not show any bugs. So my desktop had a Linux partition on it, and I played with Linux, but on my home system XFree86 on my Western Digital graphics card was still all sparkly with vertical tearing, because it updated the screen while it was drawing rather than waiting for the vertical retrace interrupt. On the new SiS motherboard that Quest got me, the framebuffer driver had an endianness issue so every 4 bytes were reversed, and the software drawn mouse cursor read the framebuffer data in one endianness and wrote it in the other, so moving the mouse cursor across the screen left a smear. And of course finding the XFree86 devs to talk to them was basically impossible, and nobody could get access to their source control without being an "approved committer", so I could never get any of it fixed unless I tracked it down myself and submitted a patch to the distribution maintainer. (Maybe they could get it upstream into the package? Not my problem at that point.)
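
The cursor-smear mechanism is easy to demonstrate. A hypothetical sketch (not the actual XFree86 code): a software cursor saves the pixels it's about to cover and restores them when it moves away, but if the save reads words in one byte order and the restore writes them in the other, every save/restore cycle byte-swaps that patch of framebuffer.

```python
import struct

def save_pixels(fb, offset, count):
    """Read 32-bit pixels under the cursor as little-endian words."""
    return [struct.unpack_from('<I', fb, offset + 4 * i)[0]
            for i in range(count)]

def restore_pixels(fb, offset, pixels):
    """Write them back big-endian -- the bug. Each cursor move now
    leaves a byte-swapped smear where the cursor used to be."""
    for i, p in enumerate(pixels):
        struct.pack_into('>I', fb, offset + 4 * i, p)

fb = bytearray(struct.pack('<I', 0x00FF00FF))  # one pixel of framebuffer
saved = save_pixels(fb, 0, 1)
restore_pixels(fb, 0, saved)                   # "move the cursor away"
# fb now holds 0xFF00FF00 instead of the original 0x00FF00FF
```

Drag the cursor across the screen and this happens once per cursor position, leaving a trail of swapped pixels.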

So I put an OS/2 partition on the Quest machine, and did all my Java development on that. And then they'd test it on their MacOS 7 browsers (both netscape and internet explorer for MacOS 7), and test it on a Windows 95 IE machine (which had _different_ bugs than IE for MacOS: I remember IE got nested lightweight component positions wrong. Instead of traversing down each canvas and adding up the offset of each parent component to figure out where to place the upper left corner of this one, it would multiply your component's offset in this canvas by the number of parents it had. Took a while to figure that out because it was usually off the bottom right of the screen. Worked fine on everything else, but not on IE.) This meant I couldn't use nested canvases but had to position every component manually in a single canvas, because microsoft. There were so many other bugs like that, Java really earned the "write once, debug everywhere" moniker.
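
To make that bug concrete, here's a hypothetical sketch (plain Python dicts standing in for AWT components) of both calculations: the correct absolute position sums the offsets up the parent chain, while the behavior described above multiplies the component's local offset by its nesting depth, which shoves things toward the bottom right as nesting gets deeper.

```python
def absolute_position(comp):
    """Correct: walk up the parent chain summing each offset."""
    x = y = 0
    while comp is not None:
        x += comp['x']
        y += comp['y']
        comp = comp['parent']
    return x, y

def ie_mac_position(comp):
    """The bug: the component's local offset times its number of parents."""
    depth = 0
    node = comp['parent']
    while node is not None:
        depth += 1
        node = node['parent']
    return comp['x'] * depth, comp['y'] * depth

root = {'x': 0, 'y': 0, 'parent': None}
panel = {'x': 10, 'y': 10, 'parent': root}
button = {'x': 30, 'y': 20, 'parent': panel}
```

With this layout the button belongs at (40, 30) but the buggy formula puts it at (60, 40), and the error grows with every extra level of nesting, hence flattening everything into one canvas.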

But I wanted very much to add Linux to the pile, because JavaOS clearly wasn't it. There was an open source thing called Jos people were playing with that I hoped might turn into a thing, but I'd been following it most of a year and it had already stalled by that point. (This is where I learned that talk begets talk and code begets code: they'd started with design discussions instead of a prototype. Their discussion quickly expanded to the point where no initial implementation could do the vast design justice, and there wasn't a canonical implementation for everybody to attach their work to or coordinate development via... A year in they had a page of desktop graphics and that was it. I used Jos as an example in my Prototype and the Fan Club talk.)

But it wasn't clear which Linux I should install. The SLS disks I'd tried years earlier were long gone. I remembered mention of "debian" and found that, but their page pointed me to the FSF which had a "don't call it Linux, it's GNU GNU GNU we take credit for EVERYTHING ALL HAIL STALLMAN!" screed, which was historically inaccurate but I didn't know better at the time and this led to a certain amount of embarrassment. And in any case: no JDK. I couldn't build and test the Java applets I was writing on Linux.

A couple months later (Feb 1998) Netscape announced it was releasing its source code and elevating Linux to a Tier 1 platform, and I went "oh good, everything should get fixed now" and expected Linux to get proper third party support from places like Sun. Instead Sun recoiled and hissed at the threat. Remind me to write up the "Sun Civil War" someday.

And I still couldn't manage to make a Java development workstation out of Linux at any point during my time at Quest, so I did all my Java work in OS/2, which I viewed as increasingly dead. (For example, hard drives were coming out that were too big for it to format, a la the ATA-1 128/137 gigabyte limit, with no fix in sight at the time...)

This is also where I got my first laptop: I mail-ordered a used IBM "butterfly keyboard" laptop and installed OS/2 on it to do Java development. I loved that machine. (Sadly, when I moved out of Austin in 1999 I leaned the laptop bag against the side of my car while packing, forgot it was there, and ran over it backing out.) But I replaced it with another laptop, and have used laptops instead of desktops as my primary development system ever since.

But mostly, while I was there, I wrote java GUI programs. Lots and lots. I wrote a 4x4x4 3D tic-tac-toe game that played against you with varying difficulty levels (the trick to beating it on the highest difficulty level was it would try to extend its longest line or block your longest line with the same weighting, but you could fork it by making structures you could complete along more than one axis...) I had to relearn _so_ much trigonometry to do the "point on a circle you drag around with the mouse" stuff that showed you the X and Y coordinates and the angle and you could set any of them by highlighting and typing into the appropriate display field and it would move the others and the point for you... Plus you could set its rotation in radians per second and it would move it for you...
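
The trigonometry in question is just the parametric circle, plus atan2() going the other way so a dragged point (or a typed-in X or Y) can update the angle display. A minimal sketch, not the original applet code; note that screen Y grows downward, so on-screen you'd typically negate the sine term.

```python
import math

def point_on_circle(cx, cy, r, angle):
    """Point at `angle` radians on a circle of radius r around (cx, cy)."""
    return cx + r * math.cos(angle), cy + r * math.sin(angle)

def angle_of_point(cx, cy, px, py):
    """Recover the angle from a dragged point, via atan2 (which handles
    all four quadrants and the vertical cases plain atan can't)."""
    return math.atan2(py - cy, px - cx)
```

Animating the rotation is then just angle += radians_per_second * dt per frame, feeding point_on_circle() each time.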

Anyway, I _loved_ that job. The money wasn't great (they'd matched my IBM salary, which was better than flipping burgers but not twice what flipping burgers paid; add in student loan debt and a car payment and there wasn't much left each month). But after about 6 months I got a sinus infection and decided I really _needed_ health insurance, and oddly enough IBM was advertising my old position doing Feature Install for OS/2, and I went "I am literally more qualified for that than anyone else on earth". And as a contractor they'd have to pay me by the hour (at a better rate than I'd been earning as an employee), and either they wouldn't work me 90 hours/week or they would PAY ME FOR ALL THOSE HOURS. Either way, I'd come out ahead.

And that's when I got serious about Linux.

July 15, 2018

Continuing from yesterday, and trying to work towards when I finally switched to Linux as my desktop system.

In 1995 I graduated and took a job at IBM, moving to Boca Raton to work on IBM's OS/2 port to the PowerPC. It was the place the PC had been invented, Lou Gerstner had brought the company back from the dead and was running it _very_ well, and I was using OS/2 as my desktop system which seemed to have an actual chance in the marketplace that I wanted to help along in any way I could.

This was a learning experience. I worked on the last 4 months of "OS/2 for the Power PC", and basically watched it die, then got moved to Austin and worked 90 hours a week for a year on something that was already too late.

OS/2 for the PowerPC made sense when they started the project, but not when I worked on it. The PowerPC was part of the late 80's explosion of RISC systems, because CISC was clearly on its last legs. I blathered about Risc vs Cisc long ago, but the point is everybody knew a RISC system would unseat CISC and thus x86, they just didn't know which one. And then x86 redesigned itself to be RISC under the covers and took the wind out of the sails of mips, sparc, powerpc, alpha, and so on because it could run the same software at higher speeds. (Backwards compatibility means you carry along your existing customer base to the new iteration, and snowball to dominance via network effects. People thought a massive performance advantage would overcome that, but didn't realize Intel could capture that performance advantage while retaining binary compatibility by sticking an instruction translation pipeline on the front of its chip that re-wrote CISC instructions into RISC instructions on the fly, I.E. the Pentium.)

The Pentium was introduced in 1993, doing the under-the-covers RISC stuff. So by 1995 the PowerPC hardware IBM was porting OS/2 onto, which was supposed to outperform everything, ran slower than then-current x86 chips. The effort was clearly doomed, but IBM had spent years on it and were going to finish it rather than cancel it and have to explain the write-off to their investors. Unfortunately sucking away years of development from OS/2 at a fairly crucial time, to work on some cul-de-sac that had been politically important to IBM's hardware side but not important to any customers, essentially doomed OS/2 by giving Windows time to entrench itself. (Although what _really_ doomed OS/2 was that when they _did_ start to have decent retail uptake, they plugged it into the standard IBM tech support phone system that cost them $35 to field each tech support call on a box of software that retailed for $49.95. Two calls to the 1-800 number and they lost money. IBM upper management _did_ actively sabotage OS/2 starting around 1993, and that's why.)

Anyway, we got OS/2 for the powerpc running by the end of 1995. (Even though it couldn't natively compile itself, everything was cross compiled from x86 OS/2 via a special watcom cross compiler; I showed them EMX, the GCC port to OS/2 I'd been using as a hobbyist for years, but they weren't interested because it wasn't "professional".) OS/2 for the PowerPC "shipped to a shelf", meaning you could _technically_ order it but only by knowing its catalog number, which they never told anybody, because if anybody HAD bought it they'd have had to train and staff a tech support line which would be exorbitantly expensive and they were desperate not to.

As we were finishing up OS/2 for the PowerPC, IBM announced a site consolidation: the Boca Raton facility where the PC had been created was closing down and being sold (it eventually became a retirement community), and we could either leave IBM or move to Austin Texas. They tried REALLY hard to get everyone to move, taking us on what I called the "bribe trip" to see "Austin as you will never be able to afford seeing it again" (boats on the lake, ranches way out in the boonies, parties on the top floor of a skyscraper!), and paying large relocation bonuses (I used mine as the down payment on my first condo). And it _also_ meant we couldn't go anywhere _else_ within IBM, it was Austin or quit. (Which meant I couldn't go to the Hursley England site where all the Java development happened. I applied for a job at Sun Microsystems, but by the time they called back I'd already signed the "yes I will go to Austin" contract and was too young to stand up and break it. The contract said I had to give back the relocation money if I didn't stay a year; lots of my co-workers were locked in for 2 years, although they got more people than they expected to move and paid people to retire early on _top_ of the relocation bonus. My team lead Pete Rodriguez (who is ungooglable because there's like 30 of him) spent the last week at the company a year or so later staring at the ceiling and laughing every few minutes, he apparently got a _good_ deal...)

I arrived in Austin in February 1996 to work on OS/2 Warp 4.0, the x86 release we _should_ have all been working on at least 2 years earlier. There were all sorts of fundamentally political compromises in the code, such as the fact the main filesystem driver (HPFS, "High Performance Filesystem") was 16-bit 286 code and thus ran slowly. Why? Because the 32-bit version was full of Microsoft copyrights and cost them extra (royalties) to deploy. But because a 32 bit version already existed, they wouldn't spend money developing a _new_ 32 bit version that didn't have any microsoft copyrights in it and which could be part of the base OS rather than an extra expensive add-on nobody ever bought. (IBM had a mindset that code that cost money to build was worth that money, and thus removing code was WASTING MONEY. It was insane. We did our best to work around it.)

I'd worked on something called Feature Install on the PPC version (inheriting it from contractors whose contracts weren't renewed, and who literally used variable names like ldkopqvzc and ldkopgvzc, and yes, differing only by a q and a g in the middle of word salad is a real example). They'd basically programmed for job security and _dared_ IBM to fire them, and their bluff was called and I inherited the mess. And they wanted to port this to the x86 version. I was 23 and had been programming since age 10: I took a flamethrower to it, unintimidated. My team lead Pete did his best to shield me from management's notice.

On the one hand, Feature Install was a package management system, a brand new idea at the time (Linux distros have rpm and dpkg, but other operating systems generally _didn't_ back then: you extracted zip files and extracted new zip files over them when you got an update). Having a package management system is great!

On the other hand, Feature Install was part of their object oriented desktop code (the "workplace shell", based on IBM's System Object Model), and the original idea had been to subclass a file folder so that when you dragged and dropped it from removable media to the desktop, it would install the package! And then they hit the problem that most of what they wanted to install didn't fit on a single floppy, so had to be split up into multiple disks, and from then on the implementation was fighting against its original design idea. And now they wanted to use it to install the operating system, which meant we had to bring the desktop up before we could install the OS, and the desktop code was NOT designed to work that way...

It was a mess. I had my hands full making it work at _all_. I still remember staying late and coming in on weekends doing a massive cleanup/rewrite of some of the fundamental plumbing (because what was there DID NOT WORK), and having a big flag day rewrite replacing over 1/3 of the codebase with 1/10th as much replacement code (making it table driven, this affected the GUI that edited the fields and the code that used the fields and the save/load logic and made it all consistent and happen via a common codepath)... and my team lead telling me to check it in on a sunday so management couldn't object...
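
The table-driven idea is worth a sketch. This is hypothetical Python, not the OS/2 code: a single descriptor table lists each field's name, type, and default, and the GUI, the consumers, and the save/load logic all iterate over that one table instead of each hand-coding its own copy of the field list. Adding a field becomes a one-line change, and the three codepaths can't drift out of sync.

```python
# One descriptor table: (name, type, default).
FIELDS = [
    ("title",   str, ""),
    ("version", int, 0),
    ("size",    int, 0),
]

def load(text):
    """Parse 'name=value' lines, using the table for names, types,
    and defaults -- one codepath instead of per-field parsing code."""
    record = {name: default for name, _, default in FIELDS}
    types = {name: typ for name, typ, _ in FIELDS}
    for line in text.splitlines():
        name, _, value = line.partition("=")
        if name in types:
            record[name] = types[name](value)
    return record

def save(record):
    """Serialize in table order, so save and load always agree."""
    return "\n".join(f"{name}={record[name]}" for name, _, _ in FIELDS)
```

A field-editing GUI built the same way just loops over FIELDS generating one edit widget per entry, which is how a third of a codebase collapses into a tenth as much code.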

And then it turned out my manager was there on that sunday, and walked by asking me what I was doing, and then told me not to do it. And then TWO DAYS LATER the testing results of the old code came in and failed spectacularly (one metric took 20 minutes to do a thing my code could do in 3 seconds) and the manager went "oh, we have a performance patch" and told me to check in my code... and took credit for 3 months of my work as a "performance tweak" he'd done in response to the bad test results.

That manager's name was Kip Harris. I _almost_ quit right there. He caused 50% of my department to quit (they couldn't transfer to other parts of IBM due to the relocation, so they left IBM. This is back when that basically didn't happen). He was then demoted out of management, and the new manager (Jim Segapelli) had a reconciliation meeting with what was left of the department and gave me the biggest raise he could (something like 15%, although as a recent graduate it was 15% of $36k/year so not _that_ big) after the previous manager had given me the lowest performance evaluation ("more is expected", although that was partly because IBM had a quota system for rankings, copying Microsoft's stack ranking which started as a way of doing stealth layoffs and then got retained permanently for a decade, which IBM copied because microsoft was doing it).

1996 was not a fun year for me. They gave me a pager, and used it between midnight and 4am three times in the same week. And I did not get paid overtime.

Anyway, IBM had us heads down doing OS/2 4.0, which kept us too busy to notice it was too late for whatever we shipped to matter. In August 1995, Windows 95 shipped, which was still terrible but _less_ terrible than the 3.x line. Instead of crashing hourly it crashed daily, which meant it was approximately usable, which made it the first version of Windows that its entire customer base _wasn't_ looking to actively replace on a daily basis. I.E. the market window for OS/2 to become the dominant PC OS had closed, but we didn't let ourselves acknowledge that until we shipped our own thing most of a year later.

When OS/2 4 shipped, we were finally allowed to interview elsewhere in the company to look for another project to work on. I interviewed for an AIX position working on X11 (because it sounded like fun and would make me learn graphics and the guts of that X11 stuff I hadn't been able to get to work properly under Linux), and the manager told me to my face that I was too young and shouldn't work on this dead-end Unix stuff because it was dying. I brought up Linux... and he hadn't heard of it.

I knew he was clueless. I was still following Linux development, or at least I pulled up the web archives of the Linux development newsgroup from time to time and confirmed it was chugging along. But I couldn't get it to run on any hardware I had. I installed the OS/2 port of XFree86, which is where I first encountered the sparkling/tearing problem XFree86 had with western digital graphics cards. I was thinking maybe I could learn enough to fix it... but the manager dissuaded me from taking the position. Even though I thought IBM was wrong about X11 being dead, if IBM thought it was a dead technology then IBM would sabotage its own version of it, and I'd had enough of fighting IBM management to learn and do things.

Tomorrow: My Java years.

July 14, 2018

We're coming up on the 20th anniversary of my return to Linux. I'm in a reminiscent mood.

Summer of 1982, between 4th and 5th grade for me, my family moved from Kwajalein in the Marshall Islands to Medford, New Jersey. (Talk about culture shock... Urgh.) Every summer on kwaj we'd spend a month visiting the mainland USA, traveling around, and before settling in New Jersey we did our usual round of visiting relatives, which means over that summer I saw my grandmother's Atari 800 with a basic cartridge and floppy drive, and I was HOOKED. (My father put the floppy in 90 degrees off from where it should be and I figured that out by the scratches the read/write head left on the sleeve.) When we got home I wrote basic code in a notebook with pen and paper, writing a program that was a series of print statements explaining how basic worked. (I was 10. My fascination got a bit meta.)

I pestered incessantly for us to get a computer, and Christmas 1982 my father got the family a Commodore 64. (Which was not an Atari 800 but I had no say in the matter and Sears was having a sale.) By new year's the "family" computer lived in my room. Probably for my birthday in February I got Zork I, although mostly I played pirated games copied from friends. (I was a kid with no money, I wasn't exactly costing them a sale and I knew it. I bought what I could, which wasn't much.) A couple years later I started dialing in to a local bulletin board system "The Realm of the Dragon II", running Dragonfire BBS software which its sysop had written in C128 Basic. (The computer came with a 300 baud modem that plugged into the cartridge port, and a card for some free hours of compuserve, but I never used them because I couldn't pay for online time and didn't want to use up a resource I couldn't replace. A friend borrowed the modem for a week a few months later, and from him I learned about local bulletin board systems we could call for free. It took a while to convince my parents to get a second telephone line, but tying up the first one a lot eventually did it.)

I wrote my own terminal program (in basic, compiled with "Blitz!" so it could keep up with my 300 baud modem) I called "junkterm" because I knew I wasn't much good at programming yet. My online handle was "Greeny", spelled in orange using C64 graphics. (I needed a cable from the sound output to the modem to do touch tone dialing, so I taught it "pulse" dialing by rapidly hanging up and picking up the phone, timed with for loops.) And then following up on that I wrote 3 different bulletin board systems in C64 basic, but didn't really run them except briefly for testing, because I only had one computer and didn't want to tie it up (the system had 65536 bytes total address space and 39811 basic bytes free when the OS ROM was masked in and running. Multitasking was not a thing in that context).

I did everything in BASIC. I dabbled with assembly language a bit but had a really hard time debugging it. (Really machine code, my "assembler" was typed in out of a magazine and wrote to memory directly with hardcoded offsets for all the jumps... I think that's a "machine code monitor" but it's easy to be unclear on the distinctions when you've never encountered all the different options.)

My friend Chip (he picked it to replace "Lawrence") had other computers (he'd started on a TI 99/4a and moved on to a 16 bit PC running DOS), and in 1988 that friend ran a WWIV bulletin board system on a 16 bit XT clone that I helped him apply "mod files" to. WWIV was shareware that gave you full source code when you registered (which you would then compile with a pirated copy of Turbo C, and later Borland C), and a user community that had never heard of the patch command explained to each other in mailing list postings about the cool new change they did and how you could make THIS part of the code look like THIS instead... Yes I learned C by modifying WWIV source, and got a book called "From Basic to C" out of the library to try to understand what I was doing, then in 1990 I borrowed a friend's book called "How to Program in Turbo C" by Herbert Schildt and _memorized_ it (to Roxette's "Look Sharp!" album, on repeat) to get full coverage and connect everything up. (Yes, I'm aware Schildt is considered a terrible programmer, but you've got to start somewhere and "the textbook is often wrong" is an important lesson for everyone. I also found a compiler bug in Turbo C my first year where an increment would get optimized away -- it vanished from the assembly output -- unless I stuck in a printf() to check its value, in which case it came back. I turned it into "variable = variable + 1;" which was not optimized away for some reason... And I was PISSED when I upgraded Borland C++ from 2.0 to 3.0 and suddenly my throw() function was now a reserved keyword. But I digress...)

In 1989 I skipped my senior year of high school and went straight on to Burlington County College (I still regret not skipping the first 3 years). I graduated from BCC at the end of 1991, and started at Rutgers spring semester of 1992. They had a computer lab full of Sun workstations in the process of switching over from SunOS to Solaris: very expensive and very unixy and yay C but the compiler on them was DEEPLY broken (it returned a "char *" from malloc(), what?) and I mentally categorized them as "big iron" that the PC would steamroller.

Meanwhile at home, I took years of christmas/birthday money plus everything I earned tutoring other students at BCC and bought my first 386 PC (pile of parts out of Computer Shopper), and put together my own DOS box and wrote my first bulletin board in C (including serial interrupt routines! Fancy. I got 8250 ones out of a book then looked up how to enable the 16550a receive buffer. Transmit still happened spinning a character at a time, because that didn't drop characters.) That one was called "chamelyn" (filename had to fit in 8 letters) which could theoretically emulate the UI of all the other bulletin board systems I'd used.

The way it worked was I created a very simple scripting language that was more or less an assembler for an 8-bit assembly language and the C had an array of 256 function pointers where it would for (;;) execute[code[position++]](); and then wrote the actual BBS part in the scripting language. And yes this means I'm one of like 30 people to independently reinvent "bytecode", and when I found out about Java later I (A) realized I'd been working on it about 6 months longer than Sun had, (B) pointed out they'd missed truncate() and couldn't shorten an existing file without deleting it. (Sun engineer "Mark English" replied to my email that I was right and I'd just missed the Java 1.0 cutoff but he'd add it to 1.1.) I then spent a lot of evenings implementing a java version of the "deflate" algorithm from info-zip, and had just started work on the decompression side when 1.1 came out with zlib bindings in the standard library working about 10 times faster than my java native version. :P

Anyway, in 1992 I found out about C++ in my language survey course at Rutgers (which covered Lithp, prolog, and C++), so I started another bulletin board system from scratch in C++ called "xblat" and I got THAT one connected up to fidonet. (Wrote my own fidonet tosser/packer, but used binkleyterm+zmodem as the network front end. Somewhere I have a random 9999 messages from the fidonet SF echo from the early 90's archived, because the message database cycled itself and that's where I moved and unplugged it, and never bothered to set it up in the new place because I got dialup internet through

The other change between chamelyn and xblat was I got a copy of Desqview (finally, multitasking! Run the BBS while using the computer!) which didn't work with my serial interrupt routines (Desqview had simple round robin scheduling and if the serial drivers were within an image then the interrupt was blocked when that image wasn't running, so there were multi-millisecond gaps with no serial port service, and characters got dropped all over the place), so I had to get FOSSIL drivers working for the serial port and teach my BBS to use them. (Binkleyterm already knew, and could do the baud rate setup and such.) The trick was the FOSSIL drivers ran _before_ desqview, so the interrupt routine wasn't managed by the multitasker, and the "gimme the data you've collected" call still worked from within a managed DOS instance...

This is the context in which I encountered Linux: in 1993 the 4 sls floppies came across fidonet, and I went "this is very interesting, this is a whole operating system distributed the way binkleyterm and zmodem are, it's like WWIV except you don't have to buy access to the base you apply all your local changes to, meaning it's hobbyists all the way down. But... why would anyone want to clone a Sun workstation? It can't run DOS programs, so I can't run my BBS or games or any other existing program I have under it."

I took it over to Chip's house and we tried to install it on one of his PCs and xfree86 WOULD NOT WORK with his Diamond Stealth Multimedia graphics card (well, mcga 320x200 mode was the highest resolution it could do). I dug into the Linux mailing list archives to see if I could work out how to get it to work, and they said manufacturers wouldn't release programming information and that this would probably never get fixed. (Years later "trying to get X11 working on any graphics hardware I had" would be the main blocker to switching to Linux as a desktop on both western digital and SiS chipsets. If the fork had happened 20 years earlier maybe Linux on the Desktop would have too... But I digress.)

Circa 1993 my professors at Rutgers were talking about how the big thing coming up was SMP (because linear CPU speed improvements had to end _sometime_), which would require threading to take advantage of, and I dug into the archives for that too... and found that none of them had SMP hardware (too expensive) and a long post by Alan Cox (#2 guy in Linux) explaining that threading was a bad idea and shouldn't _be_ supported...

But meanwhile, DOS was clearly on its last legs. The 640k barrier and lack of built-in multitasking were a PROBLEM. (I eventually worked out exactly what was going on and co-authored a paper about it, but at the time it was just the "smell of death" anyone who grew up on an 8 bit system learned to sense around a platform it was time to move off of.) Meaning I wanted to move from DOS to _something_ else... and Windows 3.x was BAD CODE, being too unstable for words was just the SURFACE problem. And Microsoft had gone full-blown evil trying to shove it down everyone's throats, to the point I installed IBM PC-DOS then DR-DOS to not be running Microsoft's version.

I wished for Desqview X but never found a copy, and when OS/2 3.0 came out (the 32-bit 386 rewrite) I _bought_ a copy (real, actual, legitimate with money!) and installed that. It supported SMP and threading, IBM was seriously pushing it for home users (with the "nuns" television ads and so on), it had good backwards support for DOS with a cleanish 32 bit migration path...

This is long, I should cut it here and pick up later.

July 13, 2018

My script currently builds an i686 cross compiler (dynamically linked, with lots of optional features like thread support disabled), and then builds _another_ i686 cross compiler statically linked with the options switched on.

I thought I could simplify this script so you can go "~/ m68k::" and it would build the -host cross compiler and then build the static m68k cross compiler with that. I tried it. The m68k cross compiler build broke in the gmp build because one of the components segfaulted when built directly with the simpler cross compiler.

Bravo, gcc developers. You know my rant about how a compiler isn't fundamentally different than a docbook to pdf converter? While that remains true, the FSF is fundamentally terrible at software and screws up everything it even peripherally touches. (Also, cross compiling sucks because the number of hosts is _multiplied_ by the number of targets when working out how many different codepaths need testing. And documenting _why_ you need to do these elaborate rube goldberg build setups is kinda horrible. My motivation here was to simplify the build so I didn't have to _explain_ it...)

On that note, glibc is appalling.

July 12, 2018

Listening to talks about David Graeber's "Bullshit Jobs" book, and I wonder how the resource curse works into this. You can't go on strike if your work isn't needed, so countries where the economy's based on things like oil revenue, and the work of 99% of the population does not significantly contribute to the tax base, have horrible human rights records because the government doesn't need the consent of the governed. They just need them to stay out of the way while 1% of the population makes 90% of the money from 1% of the labor.

Today we're automating away entire industries, and our last few recessions we've had the opposite of a labor shortage. You can't go on strike if the boss doesn't need your work.

Can automation cause the "resource curse" in the united states? How do we get to a Star Trek style Universal Basic Income future when capitalists corner the market? All the historical precedents involve torches, pitchforks, and guillotines. This time around everybody's waiting for the boomers to die before reassessing the situation. And thus we get a holding pattern...

(And the _fun_ part is the way capitalism's set up, if nobody can afford to buy anything you get a demand limited liquidity crisis that makes all the companies lose money, and then the financial sector privatizes gains and socializes losses, triggering a federal bailout to print buckets more money and give it directly to rich people. That's where we are _now_. Awkward way to run a railroad, innit? At what point do we admit capitalism stopped being a functional thing back about when Ronald Reagan cut taxes on the 1% from 70% to 28% and exploded the national debt, and that the whole edifice has been a prolonged exercise in deficit financing ever since? 90% of the modern economy is completely imaginary, the assets only exist on paper, the jobs are useless busywork, and it's mostly just a way for billionaires to feel in control and on top... well, that's what the book is about. "Why are we not guillotining those clowns again?" is a legitimate question. "Because they're septuagenarians who will die soon on their own" is literally the current answer. Which means 20 years from now is gonna be... interesting times.)

July 11, 2018

I've previously mentioned my corollary to moore's law, that 50% of what you know about programming is obsolete every 18 months. And that the reason for the longevity of unix is that it's mostly been the same 50% cycling out over and over ever since midnight, January 1, 1970.

The flattening of moore's law's S-curve has slowed but not stopped this cycle, and I'm waiting for systemd to go the way of devfsd and hald. Some bits do flake off over time (sccs->cvs->svn->git), but the portions of unix that are "old but still relevant" are as close to universal constants as we get in programming, and yes I throw the C language (but NOT C++) in that pile.

*shrug* Time will tell, but fragmentation seems less likely to form a new plateau. If lua had been the browser language instead of javascript the world would have moved to a new baseline already. (C is from the people who made Unix. Go is from the people who made Plan 9. Sure, they're the same people, but context _matters_...)

(C is a portable assembly language, however much the C++ folks shriek Luke Skywalker's "No, that's not true, that's impossible!" line every time somebody points out the obvious. And strive to sabotage compilers with endless Undefined Behavior to try to screw it up. Just admit that signed math is two's complement and make LP64 part of C20 already...)

July 10, 2018

My domain has been up for over a decade, and has a reasonable google rank, which means I get weird SEO emails all the time, which aren't just pure bulk spam but at least lightly targeted.

I also have some content up there like the history mirror, my old Motley Fool articles (which I've been meaning to properly index forever), and the kdocs staging area from back when I maintained it, none of which were originally written for this website but are basically just mirrored here. And this tweaks people who want to ADD to them, somehow.

Today's is:

On 07/11/2018 09:59 AM, [NAME] wrote:
> Hi there,
> I wanted to reach back out regarding my previous email (attached below)
> about [SITE]'s newest article, "Why Women Should Invest and How to Get
> Started".
> Many women assume that investing either requires expertise, a lot of time,
> or large amounts of money, but that's not the case! Our article highlights
> reasons why investing is important and more profitable than traditional
> savings alone while helping women craft a strategy and find an investment
> platform that works well for them.
> I believe that our article would be a great addition to your page here:
> I have included our article for your reference:
> [URL]
> Please don't hesitate to reach out if you have any questions, I hope to hear
> from you soon!
> Best,
> David

And I replied:

20 years ago I wrote stock market investment columns for The Motley Fool. I have mirrors of some of my old columns on my personal website. You're asking to add a thing you wrote, which is already on your website, to my mirror.

This seems... odd?


I'm sure there's some weird exploit going on here, but I dunno if it's SEO rank harvesting, or a cross-site-scripting exploit, or a page that's going to be innocuous for 3 months then start serving viruses, or...? (I get requests like this a couple times a month. And those are just the ones that make it through gmail's spam filtering...)

July 8, 2018

I have a dozen things queued up to do and what do I spend time on? Fixing ping. And not even the improved error reporting suggested on the list, but reviewing the code to see where that should fit in and then testing corner cases shows me the error reporting stutters (division by zero error in the summary display logic when no packets have been returned, which triggers a signal that... calls the summary display logic again at exit.) And -c isn't working when you ping a site that doesn't reply (it's limiting returned packets, not sent packets), and it looks like -w isn't working (haven't tested yet, just reading the code and seeing a dodgy "else" handoff)...

July 6, 2018

Upstream Linux broke LED platform data on sh4 when they converted it to device tree. They still allow platform data to pass in a pointer, but they changed the structure type that pointer dereferences, and the definition of the structure is local to the driver consuming it so the platform data CAN'T provide it. (There's still a generic structure providing all the info in a driver-agnostic way, but the device tree conversion changed the code to no longer _use_ it.)

And of COURSE the device tree guys' response is "convert everything to device tree, you have no other choice, we're part of systemd now" or some such...

*shrug* I have a local patch that makes LED platform data work again without breaking device tree (I think, haven't got a device tree thing to test but they obviously never tested platform data so...), and I posted that patch, and I'm using that patch. If they don't want the patch, vanilla can stay broken. As usual, add a todo list item to poke 'em again in a year to see if whoever was objecting has died yet.

July 5, 2018

Looking at watch again: "watch ls --color" prints "^[01;34mandroid^[0m" and similar. And "watch 'date ; sleep 5'" produces no output for 5 seconds, and then updates the display every 5 seconds.

That's... fairly simple behavior to implement. It's also wrong, I want toybox to do _better_ than upstream here.

(Sigh. I always get screwed up by singular/plural in directory names. Is the url "download" or "downloads"? Is it "tests/files" or "test/file"? I _fairly_ consistently use plural, but not quite enough to get it right, and lots of directories in URLs and packages and such aren't mine anyway.)

And how does "quoting arguments" work? According to strace, "watch echo 'one  two'" becomes exec("sh", "-c", "echo one  two"), which results in an output with just one space between the one and the two. (The downside of washing the command line through sh -c, passing through argument quoting becomes darn near AI-complete. Looks like they basically don't try to get it right.)

July 4, 2018

Went down to the beach yesterday evening to watch fireworks. It was really crowded. My apartment's like 3 blocks west of lake michigan, and the "beach" here is basically a corrugated metal barrier between the water and land, marking an abrupt edge of the land a couple feet above the water, stretching for farther than I explored. Swimming would not appear to be a design criterion.

There were a bunch of food trucks and people selling glowsticks. And many, many people. (Everybody gets the actual 4th off, so that was the night when nobody had to be up early.) Nothing quite as tiring as being alone in a crowd. Went home after about 5 minutes of fireworks.

Day off today, poked at toybox stuff. Didn't get as much done as I wanted, but then I seldom do. Looming end of available time limits what I'm willing to start on, most ratholes are too deep to go down, I have to leave off halfway and then next time I've made new work for myself rebuilding the context where I left off. It's not bad if I know I can get back to it the next day, but I usually can't. So I pick at things with little or no momentum.

Oh great. Fuzzy drove to a drafthouse rolling roadshow (Independence Day, with fireworks and flamethrowers, at a "stunt ranch" half an hour outside of town), and had to swerve to avoid a drunk so the car crashed into a ditch right outside the venue. Some people from the stunt ranch helped her get it out of the ditch and into the parking lot, she stayed for the event, and then tried to drive a damaged car home.

Now she's in a parking lot 15 minutes outside of austin, the symptoms sound like the engine had an infarction, and it's after midnight on a holiday. I've found a 24 hour tow service and a mechanic she can drop the car at (google maps' THING near ADDRESS search remains lovely, living in the future), but there's a _reason_ I check for anything dripping under the car after even a fender bender, and then I'm really careful driving and ready to pull off at the first strange noise or smell or gauge different from usual until I can have a professional frown expensively at it. (They train in front of special mirrors.)

Sigh. The car already had issues. Check engine light's been on for a year in a manner that stumped even the dealership, which wanted to do $3000 of _unrelated_ work on it. But other than needing to add a new container of steering fluid every 6 months it's been fine, and we don't drive it much.

All the research I've done says not only will app-summonable self-driving electric car services become available in all major cities over the next 5 years (at a monthly cost cheaper than owning an already paid-off car), but somewhere around 2025 the decline in gasoline demand will cause the gasoline supply chain to dry up and blow away. All the refineries and tanker trucks making daily deliveries to gas stations are operating on razor-thin margins optimized over the past century, and they don't scale down easily. It doesn't take much drop in volume for profits to fall below fixed costs there, and a supply chain collapsing due to unprofitability before all its consumers had weaned themselves off of it happened in Australia in 2016, causing rolling blackouts in electricity generation. Of course the coal billionaires blamed the technology that had rendered them obsolete and sponsored hit pieces, but as with the buggy whip manufacturers they still lost.

Without that supply chain, gasoline becomes something you basically mail order, like getting liquid nitrogen to make ice cream. Call somebody up and a truck delivers a large cylinder to your driveway the next day, and picks the empty cylinder up again a few days later. (I suppose if you own a tank it would be more like natural gas for barbecue grills, or furnace oil delivery.)

The point is, in about 7 years gas stations go the way of CRT televisions, CD players, and landline phones. I'm reluctant to buy another gasoline car under those circumstances, but the electric cars are still too new to be available used. A new car gas _or_ electric is at least $30k, more than I want to spend in this situation. (About like installing a satellite dish when you know cable modems will reach your neighborhood in a couple years. I limped along on dialup until the new thing was ready...)

I'm currently up in Milwaukee and Fade's up in Minneapolis, neither of us were using the car. I let my driver's license expire in 2013 (didn't want to pay an extortionate ticket to Stafford, Texas) and didn't renew it for ~4 years until a friend needed someone to help her move in another state. (The entire time I was working at Pace I took the bus there.)

Really the one this impacts is Fuzzy. I'm pretty happy to wait for Waymo to put Uber and Tesla out of business. The first real-world self driving trials are underway and showing up in my twitter feed. Soon owning your own car should be like owning your own milk cow, and knowing how to drive a car like knowing how to ride a horse. You can, once upon a time most people did. And then it became a very expensive hobby supported by adjacent hobbies. (Auto mechanics in about 20 years will probably be like farriers are today, and there aren't a lot of feed stores, watering troughs, and hitching posts downtown anymore.)

July 3, 2018

I emailed to ask if Google might be interested in buying a "toybox support contract", and heard back that although it's not the decision of any of the people I regularly communicate with (they're all engineers)... the answer is basically no. Google doesn't like to use vendors, and the corporate side doesn't see the point in investing in a command line that already does what they think they need.

*shrug* Can't say I'm surprised. I got turned down when I first asked if they just wanted to _use_ it. If I was easily discouraged I wouldn't have gotten this far. But having to keep digging away in my own time to solve these problems in spite of the large institutions is painfully slow, it would go so much faster if I didn't have to spend most of my time and energy working on something else entirely to support a family.

Large institutions have a different mindset than individuals. I need to do a proper version of my 3 waves talk. (I'm still sad that when I _did_ a proper version, which I was pretty happy with right after I'd given it, Chicago's Flourish conference never posted the recording. Sigh.) I could propose it as a talk at another conference, but I haven't been doing that this year because I'm sick of hearing from white guys and don't want to take up a slot that should go to someone else. Really I should just record the talk myself and put it on youtube where it's not _my_ fault it's completely ignored. (Yes, I say that with the original article series on The Motley Fool having gotten something like 17 million views the first year, being reprinted in their "popular articles" section years later, getting third party commentary, etc. My brain does not accept this.)

The problem is without an externally imposed deadline and an audience/editor who would be disappointed if I didn't finish by deadline, it stays on the todo list forever. (I put "podcast" as a patreon stretch goal in hopes of creating a deadline/audience without having to scrape up several hundred dollars to travel at my own expense to some hotel in some city where half the time they WON'T RECORD IT ANYWAY. Grrr...)

July 2, 2018

Google Maps' "keyword near address" search mode is very useful, as are (as kbspangler calls them) "hired dudes".

I found a handyman near my apartment to install the air conditioner that's been sitting on the floor next to the inflatable mattress for a month. Somebody with actual tools who could get the screen out and maybe would have insurance if they dropped the air conditioner into the alleyway and it hit something expensive.

They didn't, and it works fine now. Not exactly an _elegant_ install, it doesn't quite fit the window without styrofoam and duct tape. But... close enough!

July 1, 2018

I wax rhapsodic about universal basic income on twitter partly because "retiring to do open source full time" seems perverse. (I could accomplish so much more if I didn't have to do all this other work!)

But also because there's no NEED for most modern work. We've long since automated away the "subsistence farming" jobs: 200 years ago 80% of the population farmed, now less than 1% do, then manufacturing similarly declined, now we're all doing made up jobs because "employment" is a good even when it's "stand out in the street and hold a sign reminding people our restaurant exists". David Graeber's got a new book on this I need to finish reading, but his original 2013 article on it remains a pretty good summary. (The book just has a lot more supporting data, analysis, elaboration, historical context...)

Starting in the 1970's computers started seriously automating away clerical jobs: the "typing pool" at large companies became word processors, desktop publishing software took out the typesetting profession, drafting blueprints isn't really a thing anymore, tabulating spreadsheets used to be what accountants did all day and now it's the name of the program that does it, nobody "gets their start working in the mailroom" at a company with email, the only reason tax preparation is still people instead of a .gov web page is the tax prep companies spent huge amounts annually lobbying to keep their jobs as unnecessary middlemen...

Now the fossil fuel industry is switching over to distributed solar panels and batteries (that's 1/6 of the economy), self-driving electric cars are automating away taxis and truck driving, and if you add in a short haul delivery drone going from truck to house you've got mail/package/pizza delivery sorted soon.

End result is that almost all the jobs are optional, everything that really _needs_ doing doesn't even add up to 1/10th of the population. The "but what will people eat" objection ignores the fact food is so cheap you can't make a living _providing_ it except at really big scales with razor-thin margins. The current estimate is housing all the homeless people in the country would cost about $10 billion, and the military _misplaced_ that much money in Iraq. (As in "lost shipping containers full of cash", there were Leverage episodes on this and yes it was a real thing.)

The big pending unmet demand is for the surge of baby boomers needing hospice care, but HMOs bought up the independent medical practitioners in the 80's and it all turned into for-profit corporate conglomerates that treat employees as disposable. The certification requirements there mean you drown in student debt to get permission to change bedpans. That demand remains unmet for structural reasons having to do with capitalists cornering the market, regulatory capture, insurance industry middlemen, the AMA acting as a guild limiting membership via medical school quotas starting back in the 1970's, and so on. Basically "corruption" but with huge marketing and lobbying budgets to avoid anyone calling it that.

Royalty went away. Serfdom went away. Guilds mostly went away. Capitalism can join the heap, we just have to wait for the boomers to die first because they're too old to fit a new idea into their collective heads. (Well, 2/3 of them are. I'm aware it's #notallboomers, but it's most of them.)

Capitalism used to be the solution. Then it became the problem. The wheel turns, old story...

June 30, 2018

Went to the big protest in front of the courthouse, somebody was handing out posterboard squares and letting you use a marker to make your own sign, so I did "ICE at the poles, not at the borders". (Something somebody said on twitter.) I brought a powerade bottle, but it wasn't enough.

It was nice, the only dip was when they had a grey haired old white guy speak (we've had enough of that, thanks, and I say that as one), but that was just one speaker out of a dozen or so.

Next time: sunscreen, two water bottles, make a sign ahead of time.

June 28, 2018

Politics is horrible and draining and relentless, but while doing a version march from 3.18 to 4.14 at work I hit "Temporary per-CPU NMI log buffer size" showing up as a new config option under the RCU Subsystem and still had a burst of rage wanting to smack the linux-kernel development community at large for unnecessary overcomplication.

So there's that.

(A version march is where you try the release versions in sequence, because the config and board support patches need so much manual adjustment every couple of releases that a bisect is unlikely to produce anything coherent. I stopped at 4.14 because 4.15 has some intermittent flash/jffs2 corruption bug that eats the filesystem if you write enough to it, but is intermittent _enough_ that it's annoying to track down. No idea why it's happening. Stopped at 4.14 for the moment, that's relatively recent.)

June 27, 2018

Got the new m68k toolchain and target built in mkroot. Needs vivier's qemu-m68k, but otherwise pretty much works.

June 26, 2018

Work's got a giant multithreaded application that's trying to do realtime tasks, and they have a shared library that calls system() and popen() to run subtasks. Naturally: that doesn't work right.

If you fork() from a thread, the normal copy-on-write semantics don't apply and the new process has to copy all the process's memory to the new PID, which can easily be dozens of megabytes. (The problem is threading is already sharing it, and having separate processes _and_ threads share the memory would require each page to have _two_ reference counts to keep track of the different categories of sharing, which would bloat the page tables.) Forking takes locks that block system calls and page faults in the parent process, which is not a problem for a single-threaded parent that hasn't returned from the fork() system call yet, but it means the other threads of a multithreaded process can't make system calls or fault pages in either. Copying lots of memory (dozens of megabytes) takes a long time and happens under these locks. Between the two of them this manifests as a large latency spike in the existing process's other threads while the fork() is happening. We were measuring 70 milliseconds on an embedded board.

The fix is to use vfork() instead, which only blocks the parent _thread_, not the parent process. It doesn't copy memory (the parent and child use all the same mappings, even the stack, with the parent blocked until the child calls exec() or _exit()). Of course using vfork() properly is tricky (mostly because very few people these days seem to understand what it _does_), so I wrote a vsystem() and vpopen() based on vfork that they can replace all their system() and popen() calls with. (Luckily, I'd already done most of that for toybox so I could crib from my own work.)
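
A minimal vsystem() along those lines might look like the following sketch (hypothetical, not the actual toybox code): the whole trick is that only the calling thread blocks, nothing gets copied, and the child does nothing before exec except the exec itself (or _exit() on failure).

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Hypothetical sketch of a vfork()-based system() replacement: only the
// calling thread blocks, and no address space gets copied.
int vsystem(const char *cmd)
{
  pid_t pid = vfork();

  if (pid < 0) return -1;
  if (!pid) {
    // Child: shares the parent's memory until exec, so do almost nothing.
    execl("/bin/sh", "sh", "-c", cmd, (char *)0);
    _exit(127);  // exec failed; _exit() avoids flushing shared stdio state
  }

  // Parent thread resumes once the child has called exec() or _exit().
  int status;
  if (waitpid(pid, &status, 0) < 0) return -1;
  return status;
}
```

The vpopen() case is the same idea plus a pipe(); the fiddly part in both is never modifying the parent's memory from the child before the exec.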

Way back when I wrote up a couple pages on vfork() for the busybox FAQ, but after I left the busybox guys crapped a long busybox-specific digression into the middle of it (something about configuring and building the busybox binary?) that made it useless for explaining vfork to people. I should dig up my old text from before they broke it and put it in the toybox FAQ so when I want to explain "why vfork" I can link them to an existing thing. (It's on the todo list. I don't have as many half-finished FAQ entries as blog entries, but it's a similar category of problem.)

June 25, 2018

Swapped out my phone battery with a new mail-ordered one. The new one lasts MUCH longer.

Making another try at an m68k toolchain build. It's being stroppy.

Editing another batch of blog entries. I got up to an entry that trails off halfway through a technical explanation, which is where "editing" turns into large amounts of "new authorship". But it's a writeup of what I was thinking at the time, so it counts. (Autobiography is seldom written live as it happens.)

(Then again I left myself a "Did my old realtime Java GC idea ever get implemented?" note as the whole blog entry a week ago, and that's a longish writeup to properly explain, which I haven't done yet. My blog has significant technical debt.)

June 24, 2018

Cut a toybox release yesterday, so today I'm tying off some things I bumped until after the release. One of them is splitting up the ps help text so "ps -o help" (or any unknown -o field) shows the field list, and the normal ps --help is the other half.

This of course led me to thinking that if I'm breaking this into a function anyway, what I should REALLY do is move the help text snippets for each option into the option array, and just have a function traverse it to generate help output. It would have to make a couple passes because I have the six different variants of "show the command name/line" broken into their own sections, but that's easy to detect/categorize in the table so sure.

Except there are two multiline entries in the table, and it might make sense to move those into a third section. And this led me into looking at the list of "S" field output, which has a bunch of magic letters that mean stuff, and what the heck is "wakekill" anyway, so I went down the rathole of trying to figure out where I got that from, and... it doesn't seem to be in current kernels?
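
If the help text did move into the option array, the traversal could be as simple as this sketch (struct name, fields, and contents all hypothetical, not the real ps table):

```c
#include <stdio.h>

// Hypothetical per-field help text stored alongside the field definition,
// so "ps -o help" output can be generated by walking the table.
struct ps_field {
  const char *name;
  const char *help;
};

static const struct ps_field ps_fields[] = {
  {"PID",  "process id"},
  {"PPID", "parent process id"},
  {"S",    "process state (one letter)"},
  {0, 0}
};

// Walk the table, print one line per field, return how many we printed.
static int show_field_help(FILE *out)
{
  const struct ps_field *f;
  int count = 0;

  for (f = ps_fields; f->name; f++, count++)
    fprintf(out, "%-6s %s\n", f->name, f->help);

  return count;
}
```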

Table 1-4 in Documentation/filesystems/proc.txt just says "state (R is running, S is sleeping, D is sleeping in an uninterruptible wait, Z is zombie, T is traced or stopped)" which isn't all of them, so I dug down into the source until I got to fs/proc/array.c task_state_array (line 140-ish) and that's got RSDTtXZPI but still no K... Ah, here's wakekill.
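
For reference, the state letter is the third field of /proc/PID/stat, right after the parenthesized command name; reading your own takes a couple of lines (Linux-only, obviously):

```c
#include <stdio.h>

// Read our own one-letter scheduler state from /proc/self/stat.
// The %*[^)] skips the parenthesized comm field, which can contain spaces.
static char my_task_state(void)
{
  char state = 0;
  FILE *fp = fopen("/proc/self/stat", "r");

  if (fp) {
    fscanf(fp, "%*d (%*[^)]) %c", &state);
    fclose(fp);
  }

  return state;
}
```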

June 22, 2018

I have now been away from the cats long enough to miss them a little. Not enough to want to have any up here in Milwaukee, but enough that I am not Actively Overcatted.

(I wanted to upgrade from pets to kids a decade ago, but that's not how my life turned out. Oh well. Cats started registering as "this is what you got to have in your life instead of children" and it got old.)

June 21, 2018

A really good tweet led me to read about the history of dinosaur extinction research, and science mainly advancing when old fogies die remains very true. As does the observation that when someone's conclusions remain constant but their professed reasons for coming to that conclusion constantly change (same result, different justification), something is wrong.

I remember one of my school teachers had a newspaper article about this, back around 1990, which was quite detailed about the decades of research the local Yucatan oil drillers had put into examining this big crater they'd found, and their inability to get rich white english speakers' attention, and then I'd tell people for over a decade afterwards "oh yeah, they found the dinosaur impact site, it's just off the bumpy bit south of panama" and nobody believed me.

Same with the story of the guy going "no seriously, H. Pylori bacteria cause ulcers, you can treat this chronic disease with antibiotics" and the rest of the medical profession ignoring him despite that one being really easy to test. (Everybody thought ulcers were caused by stress, it was a plot point in the 1979 Disney movie "The North Avenue Irregulars" I had on videotape growing up, but in 1984 he ran an experiment proving it couldn't _not_ be a bacterial infection no matter how much senior medical professionals insisted it was stress because it had always been stress, and he got the Nobel prize for proving them wrong 20 years later, after enough of the old fogies he'd offended by having the facts on his side had finally died.)

The interesting part of the long wikipedia history of the extinction research above is how many different ways it got confirmed with people still flipping out. (Lots and lots of "3/4 of all species go extinct at the K/T boundary layer" and "iridium at the K/T boundary layer consistent with asteroid impact found at dozens of sites around the world" and "there's this K/T boundary layer AROUND THE ENTIRE WORLD, seems like a thing" being responded to with "crater site or it didn't happen".) I grew up with that question for about 10 years and then "Oh look oil drillers found a HUMONGOUS CRATER the same age as the K/T boundary, they found it years ago but they were poor spanish-speaking brown people in Mexico who sent letters to white dudes that the white dudes never opened, and presented at conferences that no white dudes attended. Funny that."

The important part is that the people insisting the world was the way it wasn't, and who had seniority and power and could make everybody ACT like the sky wasn't blue, finally died. And then of course painfully obvious truths were acknowledged; the emperor can have no clothes once he's in the casket.

(Today's "huh" is actually the diabetes vaccine, although a decade ago they thought injecting capsaicin into the pancreas would shock it back to normal so who knows. "Everybody's wrong about this" doesn't mean they _are_. Just that you have to remain open to new evidence.)

June 18, 2018

We're seeing garbage collection latency spikes in the mono app at $DAYJOB, which is frustrating because I designed a realtime garbage collector back in the 1990's. I wonder if it ever got implemented?

Hmmm, I'm not finding a writeup of my old idea to link to. (The blog I had at the time was on, it's not even in the wayback machine and I lost those files when a Zip disk got the Click Of Death.)

Back in 1997 I was thinking about both java GC latency spikes (the screen saver in IBM's powerpc port of JavaOS would freeze visibly every couple seconds), and how to extend java references to 64 bits (because clearly that was coming after we dealt with all the Y2K bugs). The obvious way to do the second was to have the actual reference be an index into an array of pointers (and when I later described this idea to a professor at UT he said that's called an "object table"), and it occurred to me doing that could make garbage collection fully asynchronous.

Garbage collection needs to know "is this reference still used or not", which is one bit of information. So have a bitfield with one bit per array index, and at the start of garbage collection memset the bitfield to zero. Then as your garbage collector walks the global variables and down each thread's stack, set the bit of each reference you find (basically "thingy[index>>3] |= 1<<(index&7);").

The trick is during garbage collection, any java "assign a reference" bytecode should set the bit for the reference it just assigned to a new location, because otherwise it could move it "behind the back" of the garbage collector to somewhere it's already checked. That way you don't have to freeze all the threads to run your garbage collector: the still-in-use bit gets set either way.
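
A sketch of that object table, mark bitmap, and write barrier (all names hypothetical; table sizing, allocation, and locking hand-waved away):

```c
#include <string.h>

enum { TABLE_SIZE = 1024 };

static void *obj_table[TABLE_SIZE];        // reference index -> actual memory
static unsigned char mark[TABLE_SIZE/8];   // one liveness bit per table slot
static int gc_in_progress;

// Start of a GC pass: assume everything dead until a reference is seen.
static void gc_begin(void)
{
  gc_in_progress = 1;
  memset(mark, 0, sizeof(mark));
}

// Record that reference 'index' is still reachable. The bounds check means
// garbage misread off the end of a stack is at worst a false positive.
// (Note |= rather than =, so the other seven bits in the byte survive.)
static void gc_mark(unsigned index)
{
  if (index < TABLE_SIZE) mark[index>>3] |= 1<<(index&7);
}

static int gc_is_live(unsigned index)
{
  return (mark[index>>3] >> (index&7)) & 1;
}

// Write barrier: while a pass is in progress, every reference store also
// marks the stored reference, so it can't move "behind the back" of the
// collector into territory the collector has already scanned.
static void store_ref(unsigned *slot, unsigned index)
{
  *slot = index;
  if (gc_in_progress) gc_mark(index);
}

// Compaction: copy the object's bytes to the new location first, then swap
// the single table entry; every reference is an index, so none change.
static void gc_move(unsigned index, void *newloc)
{
  obj_table[index] = newloc;
}
```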

There are of course implementation details. You might need as many as _three_ bitmaps to do this (one an allocation bitmap when creating new references, which becomes the "output" of the gc process, a second for the references your GC has confirmed are still used, and a third for the ones you've found but not recursively looked at the dependent objects for yet).

And of course there's a bit of fiddlyness to make sure you don't walk off the end of the stack of a thread as it returns from functions, but as long as you bounds check the values as not off the end of the object table, worst case scenario of reading garbage that you think is a reference is you just misidentify a lost object as still used until next gc run. That could create a false positive but never a false negative. (And if you do implement a bit of locking at the end of a single thread's stack to avoid that, it's still _bounded_ latency. You only ever need to freeze one thread at a time for the duration of examining _one_ function's local variables, which might be easier to implement as "when the GC in progress flag is on, returning from a function checks if the GC is in this function, and if so waits until the GC has left this function before returning").

Doing this would even let you pack your memory: copy an object's memory to a new memory location, then move its reference in the object table. All the scattered references to it in objects and local variables and such are just an index into the table, which doesn't change. The actual memory location's in one place that can be changed atomically. You can use mprotect() to yank the write bit from the old copy to pause threads that try to write to the old copy during the move and it's still bounded latency. Heck, you could fix up the write attempts in the fault handler _during_ the move if you wanted to, so the size of the object being copied doesn't affect the latency.

Of course this was back before JIT, when bytecodes were interpreted in a loop, so switching between "doesn't set the bit" to "sets the bit" modes when you start a GC pass would be easy. (Setting the bit _all_ the time would be impolite to L1 cache.) And it's a realtime thing rather than an actual optimization (avoids latency spikes, but slower over all due to the extra dereference in every object access, although modern processors with large caches and deep pipelines honestly might not care).

But I got busy and didn't pursue it. (After all, if I could think of that obviously other people doing all the fancy JIT stuff had to be way ahead of me...) I did describe the idea to that UT professor I mentioned (well, adjunct faculty? I think he was the assistant for the Data Mining class I was taking), but I got busy and didn't sign up for any classes next semester and didn't see him again for months. (Which means it must have been fall of 1996 when I spoke to him, that's when I applied to grad school and took one Data Mining class, then stopped for several years before trying again.)

Amusingly, I bumped into the professor again (I forget his name; white guy, dark beard, barely older than me and I was in my mid-20's) a year or so later at one of the Mensa Thursday night dinners at HEB Central Market's little restaurant thing, and he said IBM (which I'd stopped working for by that point) had offered a low-five-figure grant to sponsor work on my realtime garbage collection idea... but I'd never responded to the email he'd sent to my student email address (which I'd never asked the university to give me, and had never logged into; I had a home email address, he hadn't used it). I assumed at the time IBM had implemented it themselves if they thought it was worth doing, and got on with my life.

And now 20 years later I'm hitting a problem I designed a solution for right after I graduated from college. Frustrating. (Obviously I'm not personally rewriting Mono's garbage collector from scratch any time soon. But dude, how is this not obvious?)

June 17, 2018

Back from austin. Jetlagged by the redeye flight.

Finishing up fmt.c.

June 14, 2018

Thursday already. Tempus fugerunt. Far enough into my week away from $DAYJOB that I start culling my todo list because it ain't gonna get done this time.

Last night I opened one too many chrome tabs and my netbook ran out of memory to the "can't move the mouse cursor for half an hour because it's swapping" level. Once upon a time Linux used to have this thing called the out-of-memory killer but people complained it might kill the wrong process, so now when it would have triggered it instead hangs, to the point it won't recover if you leave it running overnight, and you need to reboot and lose all open windows/tabs on all 8 desktops instead. About what I've come to expect from Linux on the Desktop: lateral progress. Stuff that used to work no longer does. (A companion to "Something must be done, this is something, therefore we must do it." The old Linux Luddites podcast motto: "Not all change is progress.") Anyway, rebooted and trying to figure out what I was doing now all my context's gone. And my blog serves its original purpose! (Notes-to-self: what was I doing again?)

Checked in the do_lines() semantics change with changes to sed and cut, found a bug in cut (regression from October where adding a pedantic posix compliance corner case broke real use elsewhere), complained on the list and fixed it. Still haven't gone back and redone fmt yet.

June 12, 2018

Spent most of the day watching Fade play Revenant Kingdom, my biggest achievement before dinnertime was getting out to the credit union to activate my new debit card. But I should get a couple hours programming in, so headed out to wendy's with fully charged netbook and phone. Let's see...

I want to switch fmt to use loopfile_lines(), and it cares about the end of files so I want to change the do_lines/loopfile_lines shared infrastructure in lib to pass a NULL line to flush, and it currently doesn't which means I need to adjust existing users.

Context: do_lines() takes a filehandle and calls readline() in a loop, passing each string to the callback function. The callback gets a char ** so it can NULL it out if it wants to keep it (caller frees the string otherwise), or it can assign (char*)1 to it to skip the rest of the file. Then loopfile_lines() takes a list of files (null terminated char *[]) and iterates over them, calling do_lines() on each.

The implementation requires a glue function between loopfiles_rw() to translate argument syntax (loopfiles_callback(int fd, char *filename) to do_lines_callback(char **str, int len)), and the glue function stores the real callback in a global variable. This is slightly awkward on two levels: 1) it means you can't nest two loopfile_lines() calls (which hasn't come up yet), and 2) the global isn't in struct toy_context. (Keeping it near the user vs keeping them collated so sizeof(toys) shows how much global data we're using, both have downsides, it's one of those things where the stakes are small enough the relative cost of the less-right solution isn't easy to see, so I'd wind up agonizing over minutiae if I tried to fix it. There's a few static vars in lib/*.c, but no toys/*/*.c gets to have any so at least there's a rule.)

Anyway: changing the semantics of do_lines/loopfile_lines to indicate end of file because something cares now, and there are two current users: The "cut" command is using loopfile_lines() and doesn't care about file breaks so I can just add an if (!line) return; at the start of its handler function. But the other user is sed, which is calling do_lines() directly and it _does_ care about file breaks, but only in -i mode. And it's doing explicit do_lines(0, 0) flushes. Hmmm...
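
A simplified sketch of that contract, with the proposed NULL-at-end-of-file flush added (hypothetical and heavily trimmed relative to the real lib code; the skip-rest-of-file path leaks its line copy here, which the real code would handle):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

// Call back once per line; the callback can NULL out *pline to keep the
// string, or set it to (char *)1 to skip the rest of the file. Under the
// new semantics it's also called with a NULL pline at end of file.
static void do_lines_sketch(FILE *fp, void (*call)(char **pline, long len))
{
  char *line = 0;
  size_t allocated = 0;
  ssize_t len;

  while ((len = getline(&line, &allocated, fp)) > 0) {
    char *mine = strndup(line, len);

    call(&mine, len);
    if (mine == (char *)1) break;  // callback asked to skip rest of file
    free(mine);                    // no-op if the callback kept it via NULL
  }
  free(line);

  call(0, 0);  // end of file: NULL line tells the callback to flush
}

// Example callback: count lines, note the end-of-file flush.
static int seen_lines, got_flush;
static void count_lines(char **pline, long len)
{
  if (!pline) got_flush++;
  else seen_lines++;
}
```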

June 11, 2018

I'm in Austin for a week, taking time off from work. (I don't get paid, but I get to recover, see everybody at home, and work on toybox.)

I finished up and promoted ping, and now I'm poking at fmt which logically should use the loopfile_lines() infrastructure, except it doesn't have a flush call at the end of each file. The only current user is cut.c, which means sed is _not_ using it. I should figure out why.

These aren't necessarily the most important commands, these are the ones that are closest to being done. I need to cut a toybox release this week, and I should get kernel patches resubmitted (week 2 of the merge window), and I should integrate mkroot into toybox, and I should figure out what to do about microsoft buying github...

June 8, 2018

Set my alarm early to pack, wound up lying in bed doing the "I need to get up" but not actually moving thing. (My flight's somewhere around 6:30 pm so I'm leaving work a bit early.)

Trying to finish up everything in the world at work in the meantime...

June 7, 2018

I get emails. Today I got:

Hi Rob,

I have seen your video regarding building a Linux system. You have been doing an amazing work in the Open Source world.

Quick Intro (myself): I am an Embedded Software Developer, an Operating System enthusiast. Love and Passion for Kernel internals, building easy to use systems.

My Problems: I have tried may open source distributions. None of them would work for me satisfactorily. I run into many issues. Like,

> Sometimes I try to install some drivers (for some hardware) and it won't work. There would be endless compilation errors.
> Sometimes window will crash, some application may crash.
> One day or the other, something would be broken.

There are many more issues, I can not recollect all of them at this time.

I know it is open source software, but still as an independent user (software developer) I struggle a lot.

How about creating an operating system that just works. Idea is to create a Linuxbook (like chromebook) and a whole ecosystem for that. I see a very good market for that.

I have list of features in mind which can bring good from all the worlds in to one system.

I would like to know your thoughts on this.

Which is why I'm trying to find a good introduction to the dunning-kruger effect that ISN'T a pile of smug superiority. So far I've found this.

We all start there. "Thousands of other people have tried, but I haven't yet, how hard could it be?" That's the dunning-kruger effect. The domain expertise necessary to figure out how difficult something is turns out to be exactly the same set of skills necessary to perform the task, meaning anything you have no idea how to do seems easy.

And it's not entirely a bad thing! Linus Torvalds said if he knew what was involved in writing Linux when he started, he'd never have started. Linus dunning-krugered his way into doing Linux and then played "oh, we need to change this" for over twenty years now. He's _developed_ buckets of domain expertise as he went along, but his initial "I can do this" bravado was based on a one semester course followed by copying a toy OS. (Luckily his "I don't see why we need to do this" rejection of microkernel architecture turned out to be right because the ivory tower academics were teaching BS [LINK tanenbaum-torvalds debate], but the newbie being right and the old hand not seeing the forest for the trees was largely coincidence.)

(Although the confrontation boiling down to the newbie saying "Ok, explain it to me" and the expert not being _able_ to is an important corrective factor in science. Has been for centuries.)

June 4, 2018

Last job I spun my wheels in pursuit of world-changing goals. I really wanted to Do The Thing and we never made decent progress largely due to factors beyond our control (politics among the board of directors screwed up our second funding round leading to perpetual understaffing so we were all swap-thrashing between too many tasks, and Jeff always shot down any short-term "funding from operations" approach which was all _I_ knew how to help with, because it wouldn't bring in enough money to matter with the burn rate the company had). That turned into Giant Bundle of Stress, the money dried up and I went into debt to keep doing it, and I'm still recovering from both.

This job I'm well-paid to do a small near-inconsequential thing each day. My Big Project over the past month was a kernel version upgrade. Nobody outside the company is ever likely to notice and even within it only a half-dozen actually understand what a given thing is for. (My last two issues were "show the right mac address in the boot console messages on startup" and "the board should remember the DHCP address it had last time and request it again next boot". The larger project is that we're migrating networked climate control systems for large buildings from Windows CE to Linux because CE on this hardware was end-of-lifed.) But the work gets done. Things are finished, checked in, tested, and we move on. This company has no shortage of money.

Neither was what David Graeber's new book calls a "bullshit job", but each is only half of an ideal job. I'm strongly reminded of an old dilbert cartoon.

I look forward to the baby boomers dying so we can stop considering capitalism as normal, and cash in the past century of economic and technological progress to finally get universal basic income. (There may be some "let them eat cake" between here and there. I really really hope the current administration means we're getting the Robespierre/Napoleon part out of the way _before_ guillotines come out, but I have no clue what the future holds. The millennials seem to be waiting for the boomers to die before taking any other action. Gen X has been waiting for the boomers to die... since Reagan was elected, I suppose.)

(Yes, I am aware #notallboomers. As with #notallmen, it's not a useful objection. This ain't getting fixed while Racist Grandpa is voting with the dregs of the confederacy. Society advances the same way it always has.)

(No, "people will just breed to consume the resources" turns out not to be true: if you educate women and reduce child mortality rates without providing significant support for child rearing a la subsidies and free daycare and so on, population falls below the replacement level. This has happened in every advanced civilization around the globe, it's a significant problem and part of the reason we have more old people than young people, and only places like Finland have been effectively compensating. The average family size is _less_ than 2 children. The US population has only been growing recently due to immigration, and that's rapidly declining because nobody wants to come here anymore. Not even to visit.)

June 2, 2018

I've been doing cleanup for a toybox release, although I'm flying to Austin on the 8th for a week off from work, so might bump the release back another two weeks so I can work on it while I'm there. Or maybe that should just be the start of the next dev cycle...

Anyway, I'm going through the github pull requests, and I'm finally taking another look at izabera's commit adding unlimited precision support to sort -n. I rejected it the first time as unnecessary for sort (single precision float is fine), but... what with the offered bc implementation turning into a mushroom cloud of politics, plumbing towards that end seems good and doing math as ascii strings (via long division and such) is dog slow but should be reliable and easy to understand?
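
To give a flavor of the "math as ascii strings" approach, numeric comparison of arbitrarily long decimal integers needs no conversion at all (a simplified sketch, not izabera's code: non-negative integers only, no signs, decimals, or scientific notation):

```c
#include <string.h>

// Compare two non-negative decimal integer strings numerically, with no
// precision limit: after stripping leading zeroes, a longer digit string
// is a bigger number, and equal lengths reduce to lexical comparison.
static int numstr_cmp(const char *a, const char *b)
{
  // Skip leading zeroes so "007" compares equal to "7".
  while (*a == '0' && a[1]) a++;
  while (*b == '0' && b[1]) b++;

  size_t la = strlen(a), lb = strlen(b);

  if (la != lb) return (la > lb) ? 1 : -1;
  return strcmp(a, b);
}
```

Addition and long division work the same way, a digit at a time on the string representation: dog slow, but no overflow and easy to audit.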

June 1, 2018

I should have put a release out last weekend (3 months), but I was exhausted and Fade was visiting and I hadn't noticed the date. So I'm chipping away at it this weekend, which is another variant of "closing tabs". Cleaning up the endless backlog of half-finished stuff, and doing proper writeups.

I'm tempted to move the ps -o field list out of the "ps --help" output and instead have "ps -o help" show the list, but... I'm not sure it's an improvement? It still wouldn't get the remaining help text small enough to fit on an 80x25 screen (although I've got a patch getting it a lot closer).

And once I get zcat promoted I plan to compress the help text (at least for non-single builds), and this would move it out of that. (Although when I've got the infrastructure maybe I could add some COMPRESSED_STRING() macro to append large blocks of text to the compressed help field entry and gimme a pointer to it using the same infrastructure... But there's already too much generated/ magic going on. Hmmm.)

May 31, 2018

Finally got a stable kernel forward-port at work so we're not working on a 4 year old kernel. Not to current (some intermittent bug in the flash code keeps eating the filesystem?) but two releases back (4.14) seems stable.

That was an _exhausting_ 3 weeks.

May 30, 2018

Youtube's strategy is to show way more commercials to annoy people with music playlists into paying for a subscription. (No really, they explicitly said this.) My reply is basically "better dead than red", and I just turn the volume down when it does a 15 second commercial between each song. (There's a dial on my headphones.)

But WOW there are a lot of car commercials (and car insurance commercials) they're spitting at me. Which are hilariously ineffective because I know that both are going away in a few years when google maps adds a "ride" button next to the "directions" button when you select a destination, and a self-driving waymo thing shows up. That is a service I'll subscribe to. If Google wants my video streaming dollars they can buy netflix or hulu. (I doubt they're going to buy amazon and the video is part of the giant Prime hairball anyway.)

In the meantime the car companies (and car-adjacent companies) are desperately trying to squeeze out a last few dollars before closing time. Self-driving means car sharing which means you need 1/10 as many cars. Yes even at rush hour: there's "be there at 6 am" through "be there at 10am", and at a fairly pathological average of half an hour to get there and half an hour to get back to pick up the next person (current reality's less), that's 4 cycles per car without "surge pricing" leading to the sort of carpooling people already accept for airport shuttles.

There's zillions of bigger issues (turning all those parking lots into extra buildings increases density which makes cities function better, no more gas stations, geico goes away, jiffy lube goes away, no more car loans...)

Sigh. The fix for youtube being stupid is to load music onto my phone via usb (the stuff I had on there went away in the factory reset last year), but they made it stop mounting as a USB stick a while back and I hate setting up the funky magic windows protocol thing they think we should use these days. I suppose I should learn to do it via adb...

May 29, 2018

I'm overdue for a toybox release! Oops. (Feb 24 + 3 months was last thursday.) As with all toybox releases, the big push of work (after finishing whatever I'm currently working on) is going through the git log to see what I did so I can write up the release notes, which always finds dangling threads I want to tie off before shipping, things I don't really have a proper test for, and stuff I did the first 1/3 of and could probably finish with just another hour or two of work (it's never another hour or two, it's days)...

At work I need an sh4 system booting to initramfs from real hardware, and getting a glibc buildroot to fit in the 4 megs of flash reserved for the kernel in the partition layout ain't happening (the uncompressed /lib directory is 7 megs with just about everything switched off).

So I threw a mkroot cpio at it, and although I got it to boot... the eth0 address is the qemu static IP, and I need dhcp. Hmmm. I need to wget stuff, which means it needs to talk to the lan, which means an address in the dhcp range.

I switched buildroot over to musl and am rebuilding, that'll probably be small enough to fit. But collecting data for what an actually _usable_ mkroot needs. Shell, route, and dhcp. Heh, and strace didn't build against musl with buildroot's toolchain. (I have the start of an strace in toybox I should find a few hours to work on...)

Right, switching back to mkroot and adding dhcp there... it builds toybox defconfig so grabbing the dhcp out of toybox pending is awkward, but I'm building busybox with a config file so switching on the _busybox_ one is trivial... ok, try that.

My limiting factor these days is more "energy" than "time". Work pays quite well, but leaves me exhausted. It's still a step up from working for a startup that couldn't reliably pay me, still left me exhausted, and didn't have a defined schedule so I could say when I'm _not_ working and thus focus guilt-free on other stuff. (What I did for them was never _enough_, but I was spinning my wheels inefficiently for a lot of it because I was spread too thin and couldn't rest and recover.)

That said, at SEI I was working on potentially world-changing stuff that advanced the _definition_ of the state of the art along multiple major axes, and here at JCI we're coming out with a software update for climate control equipment in large buildings, which already worked. It's worth doing, but it's the programming equivalent of washing dishes. The challenges are of the "oh that's baked on, how do we avoid scratching the no-stick surface" variety. It can require close attention and cleverness, but the problem being solved isn't novel or of much interest to anyone else.

Still, I'm being well paid for work people find useful which is neither unethical nor a legal quagmire. The AT&T set-top boxes we were working on back at Pace (the position before SEI) were supposed to spy on the people who used them (monitoring what shows you watched on Netflix, for example, so AT&T could try to sell its own video streaming services to you. We weren't implementing that bit, but we knew it was coming because they _told_ us). The worst you can say about the JCI boxes is "maybe we could sell them a little cheaper", and the customers aren't complaining. They seem quite happy, and have been for decades: the boxes do what they're supposed to, to the best of our ability to make them do that, and do not act against the customer's interest in any way that I am aware of.

For a Fortune 500 company, that's kind of impressive. (Ok, JCI moved its headquarters to Ireland a few years ago to dodge taxes. There _is_ evil going on. But we're not being directly asked to perpetrate it yet. As capitalism in 2018 goes, that's high praise.)

May 28, 2018

Memorial day. Fade went back on the bus at 2.

Poking at the hello world on bare metal stuff again, trying to strip down a userspace one first: building a version containing just write(1, "Hello\n", 6); with gcc -nostartfiles --static still links in hundreds of bytes of useless __libc_disable_asynccancel crap I haven't figured out how to disable.

On the kernel side, trying to do an ELF version of Balau's example without the crazy linker script: you can "gcc -Ttext 0" to move the text segment to location 0 (where the arm reset vector apparently lives, so execution starts there). Yes there are more interrupts that could happen later in the table, but for these purposes I'm going to assume they won't and just put the hello world there. (You'd think that the interrupt table would be a list of addresses the processor jumps to, but no. It jumps to a fixed location where you put a branch instruction, and the spacing is enough for "branch instruction plus address". Why? Because the Arm designers thought that was a good idea.)

May 21, 2018

The kernel has "make savedefconfig" which does something a little like the plumbing I have, but the format's different. Miniconfig is the deltas from allnoconfig, and defconfig is the deltas from the symbol default values (many of which are on).

I like miniconfig because _conceptually_ it shows you all the important selected symbols. The ones that if you started from allnoconfig, you'd have to switch on to get this configuration. This "defconfig" stuff starts from an arbitrary base and makes arbitrary changes to it, half the knowledge winds up in each place, and the result doesn't necessarily mean anything by itself.

But it's what the kernel guys are using, and it's already merged. Inferior but deployed.

May 20, 2018

Went off caffeine this weekend. Spent a lot of it sleeping, the rest failing to make any progress on any of the open source stuff I want to do. A bit underclocked just now.

I didn't figure out a way to remove mmap() from the elf parsing stuff in file.c, couldn't find my open windows for the rbtree stuff, forward porting buildroot's nommu arm qemu image from the 4.4 to 4.5 kernels hits the switch to device tree after which there's no console output whatsoever and I haven't figured out why yet...

Currently poking at putting the mkroot script in toybox, and as I'm converting it I realized... there's no need for the airlock script? All I'm compiling is 1) toybox itself (which I already had to be able to compile under the host to _create_ the airlock directory), 2) the Linux kernel. There are no other packages being built, and I'm removing the "modules" stuff that adds extras, hence no real need for the airlock?

I still want to build a kernel under the resulting system, but I need to add a "make" command first, and an awk, both of which are kind of a big ask. But getting a system to boot to a shell prompt should be as simple as possible, and adding native development tools to the result should be a tarball extract or similar. ("Or similar" because if the base system is in initramfs, the development tools may be bigger than the filesystem can hold, hence the symlink script aboriginal had to do that from the squashfs.)

I need to make an archiver that can create/extract squashfs like tar or zip files...

May 15, 2018

The System76 machine arrived. Nope.

Oh well, I tried.

May 14, 2018

Got another email about j-core from somebody wanting to participate in its development. I sent back the first paragraph of this:

Unfortunately I haven't had access to the or domains for months. Ever since the servers moved to cloud hosting my ssh key didn't transfer over, so I can't update the website or fix the mailing list. I also lost personal access to the "not cleared for public release" code at the start of the year, so haven't been able to do any development on that stuff either. (Or track the development being done in japan for proprietary uses.)

I did _not_ send back these paragraphs, which I typed and then cut out:

I arranged weekly calls with the developers for a couple months, and brought these issues up each time, but nothing ever changed. Last week I _didn't_ arrange one to see if the project maintainer would notice its absence, and he didn't.

The problem is I don't work for that company anymore, and although I was still trying to participate in the open source side of things there doesn't really seem to be one when it doesn't suit the company's proprietary interests, which it hasn't recently. Maybe that will change in future, but so far anyone not working at the same company as the other developers has been a second-class citizen, so at best we're looking at something like Android or Sun's OpenOffice or Mozilla under Netscape before Jamie Zawinski resigned.

The classic failure mode of that kind of read-only project is that there's no point in outsiders submitting even bug reports upstream because the version they're using is a year newer than anything you have access to, and the design and development conversations all go through privileged insider-only channels you can't even read, let alone contribute to.

*Shrug* Jeff doesn't see it that way, you can always try asking him. I think [email redacted] still works.

Probably what somebody needs to do is take the last open-source tarball, check it into github, and do an open fork as a real project just ignoring what SEI proprietary does. But the chance the private one _might_ release another drop has so far overshadowed the public one enough that nobody's seriously tried to bang on the "stale" version that's out there.

May 13, 2018

On Friday, work asked if I wanted to extend my contract here in Milwaukee through next October. (This job pays twice what SEI was paying _before_ we all went half-time, then SEI fell behind on those payments to the point I don't even remember how much they owed me. Yes, I kept the macbook, although I gave it away last month.)

I texted Jeff about this, since he's been making noises about Jam Tomorrow turning into Jam Today and maybe we'd be able to come back and work there again. Last night he noticed and responded "sounds like your work there is quite successful... We are on track over here also." When I said that stable and lucrative but not world-changing isn't necessarily what I want to do with my life, his response was "Up to you, of course."

When Jen stopped running the 5pm daily calls, I organized weekly ones with Jeff as a replacement. Last week I didn't organize one, to see if anyone else would notice. They haven't so far...

It's frustrating: I'm still trying to participate in j-core as an open source project. It would be nice if they'd release some source for me to work with! I'm told Niishi-san is still working on the VHDL, but I haven't got a login to the VPN anymore...

May 10, 2018

Ooh, liwakura on irc converted the PDF version of my old /usr/bin rant back into text.

Many moons ago I did a post on the busybox list, then a magazine asked to reprint it and I said I should check the claims against primary sources, and corrected things. (For example, I was right about the 3mb total drive space, but remembered an even split when reality was 0.5 megs for the fast drive and 2.5 megs for the slow one. Which meant when /home showed up it was another 2.5 megs, not another 1.5 megs, so the first unix development system had 5.5 megs total disk space, not 4.5 megs. Well _I_ care...) Then they sent back a PDF, with bibliographic links to the old documents on Dennis Ritchie's home page where I'd learned this stuff in the first place.

My old busybox post got linked from places like hacker news, and seems to have kicked off the spate of /usr merges (Lennart Poettering linked to it from the piece he wrote justifying the Fedora 17 usr merge, for example) but I was always slightly embarrassed that the "off the top of my head rant, got some of the details wrong" version kept getting linked to, and not the corrected version.

Yeah, that ship has sailed, but I should convert this to html and post it anyway.

May 8, 2018

I'm poking at buildroot's qemu_*_defconfig targets and seeing what architectures I can learn to add to mkroot from that. Which is why I sent this patch:

--- a/package/elf2flt/
+++ b/package/elf2flt/
@@ -29,4 +29,10 @@ endif
+       ln -s $(GNU_TARGET_NAME)-ld.real $(HOST_DIR)/bin/ld.real
 $(eval $(host-autotools-package))

To the buildroot list, but it doesn't seem to have wound up in the web archive. Spam filter ate it, maybe? Meh, I tried.

Some patches are sent just so I can say I did, not because I realistically expect it to be useful to anybody else. It's not a licensing issue (I'm not shipping binaries to anybody), but it would be selfish to keep the fix to myself, and it's good to be able to google for it again if I need to dig it up a year from now. But the buildroot guys actually fixing their stuff? Not something I really expect to happen, at least not promptly and not because of me. They didn't even regression test this infrastructure from when it _was_ working, or I wouldn't have needed to fix it. And no, I'm not jumping through the hoops and retrying and negotiating and reminding them to get a fix in. (I may have been somewhat burned by linux-kernel.)

The bug I hit is building qemu_arm_versatile_nommu_defconfig in buildroot, which dies when elf2flt's usual "prefixed ld wrapper tries to call the non-prefixed renamed real linker" glitch hits, and the Wrong Fix is to symlink "ld.real" to the prefixed-ld.real that's actually there. Which the above patch does. A _proper_ fix would be to switch arm to use fdpic, but that's been stuck out of tree for years because gcc development is terrible and llvm hasn't noticed that nommu exists yet.

The other problem with adding support for more targets to mkroot from buildroot's qemu configs is that buildroot _explicitly_ doesn't support native toolchains -- as in they had support but then removed it, and when I asked in IRC they said it's not within what the project considers its current scope, which is "to make cross-compiling easy" and nothing else... Up to the point you can't build a buildroot system using the host toolchain, which is _sad_. You haven't got the option to _not_ cross compile. (I started putting together a patch and they said it would never get merged. Nuts to your white mice.)

Anyway, these QEMU targets haven't gotten regression tested a lot, so I had to fix the very first target I tried to build, and it's hit or miss since. (And my netbook is way too slow to build these in a useful amount of time.)

May 7, 2018

Ooh, Oligarchs! Fascinating.

Backstory: In the 1980's Ronald Reagan scrapped the banking regulation that had prevented a repeat of the 1929 stock market crash and associated economic mushroom cloud for 50 years, on the theory it had worked so well clearly we didn't really need it. This pretty much immediately led to the Savings and Loan crisis/bailout of 1991, followed by the asian economic collapse of 1997 (other countries copied our norms and put their money in our casino; japan invested in a lot of US real estate), then the dot-com bust in 2001 and a series of multi-billion dollar financial scandals under Dubya (Enron, Worldcom, Bernie Madoff, etc) and then the mortgage crisis of 2008 where the wheels very nearly came off the global economy and the millennials basically got screwed in perpetuity. (Don't get me started on making student loans immune to bankruptcy. The Boomers have a _lot_ to answer for.)

By the way, the 2008 crisis was fallout from the 1980's invention of the mortgage bond, which was detailed in Michael Lewis's book "Liar's Poker", and then a quarter century later Lewis wrote a follow-up (he was present for lighting the fuse and still around for the bang) that became The Big Short (which is on netflix). Also some nice NPR coverage. So basically Reagan pulled the pin, H.W. and Dubya Bush fanned the flames, and the whole mess exploded and got duct-taped back together. It is _so_ not "fixed".

Russia's economy wasn't exactly unscathed by all of this, but they worked on a different cycle: what really hurt them was the collapse of oil prices, because they're every bit as dependent on oil revenue as Saudi Arabia. (The top three oil producers are Saudi Arabia, Russia, the United States, and then there's a BIG falloff before you hit the #4 producer, which is Canada.) And Russia has ALWAYS been dependent on oil prices, since the days of the Czars a hundred years ago. The reason the Soviet Union collapsed in the first place was a significant decline in oil prices. (That link is a longish thread about Russia, each tweet linking to a lot of further reading/watching.)

Russia can't feed anything close to its modern population from domestic production, only the westernmost part of the country is close enough to the mid-atlantic current to have a proper european climate; just like canada's population is clustered against its southern border with the USA, Russia's is mostly along its western border with Europe. The land to the east hasn't got the climate or irrigation to grow a lot of food, so they have to import it. But Russia doesn't manufacture much or perform a lot of services other countries really want to buy. That's why about 80% of Russia's international income is from fossil fuel exports, without which they can't feed themselves.

The implosion of the soviet union was like going through the Great Depression all over again, except the rest of the world wasn't going through it at the same time. They had more than a lost decade, their infrastructure eroded, institutions collapsed, lots of trained people emigrated and those who remained didn't get the same education or experience. (You think the millennials had it bad after 2008, people in Russia were starving and freezing to death, plus an epidemic of alcoholism to make our current opioid crisis look mild.) They eventually dug out of that, but Russia can't play at the level it used to, so it's decided instead to drag the rest of the world down to its level. (The Gerasimov Doctrine, basically spending your entire defense budget on psy-ops delivered through the internet. Remember, Russia's current leader came up through the KGB and ran its successor, the FSB.)

And just as Lancelot was dragged down by "the old wound" in the camelot myth, the USA's original sin of slavery is ours. It was a major hurdle during the declaration of independence (most of the musical 1776 was about the issue of slavery), then it resulted in the civil war (there was a great Ken Burns documentary about that, it's on Netflix), and then the confederacy transitioned seamlessly into the KKK (Nathan Bedford Forrest, a confederate cavalry general, was its first Grand Wizard) and Jim Crow laws which MLK fought against in living memory. When the South swapped sides after LBJ signed the Civil Rights Act in 1964, the confederate rot switched from eating away at the Democrats (who were at least used to it) to the GOP (which wasn't, and the tea party chased out everybody who wasn't a reality-ignoring loon), and that's the vulnerability Russia exploited.

And that's the context for the paul manafort article at the start. Russia's a kleptocracy, organized crime running the show, with its fingers in a bunch of nearby states the same way the USA had the Monroe Doctrine. The New York and New Jersey real estate markets are more organized crime, which is where The Donald comes from. The Russian mob runs the Russian government, just like the US mob briefly ran this country (de-facto) during prohibition. We had alcohol funding our organized crime, they have oil funding theirs. The USA can feed itself without alcohol (at least until the fossil water under the breadbasket states runs out; we're a net exporter of food). But without oil Russia hasn't _got_ an economy, and they are SCARED that the rise of solar and wind and batteries means their gravy train's coming to an end. The march of technology (and recognition of global warming as a real problem) is an existential threat to Russia's continued existence as a first world country.

That's why they hijacked our election. When the RIAA and MPAA were faced with the internet eliminating their role in music distribution, they lobbied for insane extensions of intellectual property law (ala the Digital Millennium Copyright Act). The fossil fuel companies went for regulatory capture instead, except they're 1/6 the world's economy (if they were a country their economy would be the third largest after the USA and China), and to them "regulatory capture" means toppling governments and installing puppet regimes. Russia and Saudi Arabia already have the oil interests running the government, the USA was the only big oil producer that _didn't_. And now it does.

The rest of the world's economy only matters to Russia (and Saudi Arabia) when it comes to A) being able to afford to buy their oil/gas, B) being able to export food for them to buy. That's _it_. Anything else we do they'd like us to stop, because they can't compete with it. They got NOTHING except what they pump out of the ground, but that's currently worth enough to make them international players.

Solar/wind/batteries are killing fossil fuels in solid/liquid/gas order. Coal is toast and has resisted attempts to revive it. The switch to electric vehicles will decimate oil (and self-driving subscription fleets mean the new thing needs about 1/5 as many vehicles as the old so "time to replace the entire fleet" isn't the issue people once thought it was: most of the old vehicles will be scrapped, not replaced), and batteries mean solar and wind become baseload power taking out gas.

Russia and Saudi Arabia are terrified of this. They're trying desperately to slow if not stop it, but also trying not to draw _attention_ to it so their enemies don't invest more heavily in solar/wind/batteries as a way of opposing them. The best way for the USA and Europe to fight back against Russia is to cut off the fossil fuel money maintaining their economy. That's why the Dorito keeps adding tariffs to solar panels.

May 6, 2018

Carl Dong gave $40 to my patreon and proved I _can_ be bribed into getting over it and putting mkroot back up. I also posted a few longish explanatory whatsises to the list. Carl's donation means the amount I'm getting from Patreon this month is more than I earn in one hour at $DAYJOB! (Woo!) It's not something I expect to retire on any time soon, but the concrete expressions of appreciation are really good motivation. (It's like flowers and chocolates, only I can spend it!)

He also emailed wondering if he should ping his company's HR department because I've talked about wanting to work on open source full-time, and his company has various exciting open source projects I could work on... Except that's what happened at SEI. I went to go work at an open source company, and spent all my time on _their_ projects (like j-core) instead of the ones I already have (like toybox). I've already _got_ open source projects I want to work on, being hired to work on "open source" takes time away from that just as much as any other $DAYJOB. They're never hiring me to do the stuff I already want to do, they're hiring me to do something else they want done that isn't already on my todo list. (That's why patreon is nice, it's "go work on your todo list". Get that done. *thumbs up* Got it boss.)

The job I'm doing now isn't bad. Right now at work I'm digging into new corners of Linux (currently migrating jffs2 to ubifs, which means I'm learning how to create a ubifs instance on the ubi layer for this flash; "mtdinfo -au" is your friend, and fun corner case: your boot can be rate-limited by the amount of console output you're spewing to the serial port...). But it's not my existing open source todo list. It still takes time away from the things I'd be doing if it was my choice.

Back before I got married I did high-dollar consulting gigs for a few months, then did open source in the multi-month gaps between them, often not looking for work again for half a year after the last contract because I made so much more than I spent (especially in the condo, which was cheap to live in), so I more or less wound up working half time and doing open source half time. Then I got married and Fade was kinda stressed by the uncertainty, so I got Real Jobs at Timesys and so on. Now Fade's in a doctoral program with a scholarship that covers her dorm room, and she's on anti-anxiety meds, and I'm kinda edging back towards my old consulting habits. Only thinking "if I keep consistently employed for 5-10 years and sell the house that neither of us are currently living in (Fuzzy's taking care of it), maybe I could retire and do this open source thing full-time."

Except "spend your life waiting to live your life" is something I've never been good at. Happy to work towards a goal, but there are limits. When I _get_ money, I tend to mail it to people who need it more than I do, which is the main reason I'm not rich. I've earned a good living for decades, wrote about investing for 3 years back during the dot-com boom, and paid off my student loans and cancelled all my credit cards permanently almost 20 years ago... but have never quite mastered the "accumulating" part of wealth accumulation (beyond home equity). I don't spend money on fancy stuff, I give it to other people who need it.

(I strongly suspect it's not possible to be a billionaire _without_ being an absolute bastard. At best, you're wilfully blind to the suffering around you. But then I can only justify retirement saving as a cross between self-care and not being a burden on others later, so I'm not sure I'm a good baseline for comparison here. It's not so much altruism as hardwired lack of self-worth papered over with years of work. Yeah, being "recovered" from depression is a lot like being an alcoholic in recovery. The gaping psychic scars are still there, thanks Dad, I just know how not to trip over or dwell on them these days. *shrug*)

May 5, 2018

And a weekend again.

Ordered a new laptop from system76, which might make it to Fade's while I'm there next weekend. (We'll see.) It probably won't work with the lapel mic either, but eh.

May 2, 2018

So today I'm trying to get a hello world ELF program to boot on hardware. This is sort of a strange complement to mkroot. I should explain.

Years and years and years ago I talked about a hello world kernel, which is a tiny kernel that writes the bytes "Hello world\n" to the serial port, then halts or spins. Various people have done it over the years for various platforms (I linked to one such effort in my sadly jetlagged "simplest possible linux system" talk), but nobody seems to have tried to make a generic-ish version for each board.

But having a hello world kernel for a board has some interesting properties: you can glue it to the front of a real kernel and fall through to make sure the bootloader has loaded your stuff and handed off control properly (which means you got the compiler variant, packaging, entry point, and load mapping right-ish). You can cut and paste the "spit out a string" code later into your kernel as a simple debug printf (we got here) arbitrarily early in the boot. If you're trying to add qemu support, it's a nice first target. And what I'm trying to do _now_ is get a hello world vmlinux image for the Turtle board so people who want to port other operating systems to j-core have a starting point. It would also help with the "make the qemu-system-sh4 first serial port work" effort.

And then eventually, I'd like to genericize the qemu vmlinux ELF loader to apply to _every_ target, so you can always feed a vmlinux to -kernel and not have to work out "what packaging should I be using for _this_ board". Unfortunately, the only vmlinux I seem to have working on qemu right now is powerpc, and although I can get a _kernel_ to work I haven't built my own vmlinux from a .c file that does it yet. It's being stroppy.

May 1, 2018

I updated my Patreon! Woo! I'm not even managing to post there _quarterly_. I really suck at this.

I have a "podiatry" directory in which I've been collecting scraps of podcast ideas for quite a while. What I _don't_ have is video editing software, or any sort of experience/skill with such. The "Linux Luddites" podcast used audacity, of all things, to remove the pauses and "ums" in people's speech. So maybe I could do it with a purely audio format, but I watch a lot of youtube ones with either animations or screen capture of programming stuff, and I dunno how to edit that part of it.

And it turns out that neither my netbook nor my desktop will work with the lapel mic I got, it needs to be powered and the microphone jack on both won't do that.

The lapel mic works with my phone, but it isn't noticeably better than the phone's built in microphone, and the problem with recording on the phone while screencapping on the laptop is synchronizing the two; they tend to record at infinitesimally different rates that drift away from each other over time. A clock being half a second per minute different isn't a big deal in playback or recording, but if the audio and video drift 3 seconds apart after 5 minutes it's way different.

So I'd really like the same program that's recording the video to also record the audio, so they stay in sync. I could capture video as a _camera_ with the phone, but I'm not trying to record my face, I'm trying to record my laptop screen with the terminal windows and/or web pages I'm talking about (I.E. the interesting part).

I suppose I could try to come up with video to go along with prerecorded audio? (That's how animation usually works. Hmmm... I've also pondered trying to give a storyboard to Fuzzy and seeing if I can bribe her into doing animation for me, but (A) she's a lot busier these days, and (B) that's harder to do when I'm in Milwaukee and she's in Austin.)

I miss when I wrote Motley Fool columns. They never told me what to write about, but I had regularly externally imposed deadlines forcing me to Ship Something, and that forced me to do imperfect work and get it OUT there, which is really important. (The 80% correct thing you have assembled in your head will be forgotten in a week, just getting what you have on hand written down and out there is often the sort of thing you look back at a year later and go "Wow, how did I ever manage that? I'm no longer that good, I suck for _different_ reasons!") (I've learned this is wrong, and to ignore it. It's in the impostor syndrome bucket.)

April 28, 2018

And almost two weeks go by unblogged. I went to Fade's last weekend, hurt my back doing laundry (didn't expect my empty laundry baskets to walk off if I left them alone for half an hour, you have to use a key to get into the _building_...) but it only bothered me for maybe 5 days? Better now. Not _that_ old (or overweight) yet.

While I was at Fade's the japanese remineralizing toothpaste arrived! ("Apagard", Rachel gave it a "works for me" in a Rachel and Jun video about things you can only get in Japan.) I look forward to seeing how that works. (I mean, it's toothpaste. It works as toothpaste. Already tried that much. Whether or not it's effective at building new layers of calcium phosphate on my teeth via nanotechnology is the question. A quick Google says Procter & Gamble bought the US patents to this technology years back and have consistently failed to bring it to market for 5 years now. As I keep saying, technology advances when patents expire, not when they're granted...)

My toybox irons in the fire are A) restarting route.c from scratch, B) restarting sh.c in a new file, C) writing lib/arbys.c (rbtree code).

I figured out I need to start route.c over from scratch to satisfy the multi-table objection that got that command removed from Android. I need to clean up and promote that anyway because there are two commands left that mkroot is using from busybox; once I've replaced both I can yank busybox and have a toybox-only build script, then I can glue it and modules/ into a single script and check it into toybox's scripts/ directory or something; still not sure where package downloads should go, maybe it needs its own "hermetic" subdirectory. (Or I could call _this_ dorodango. Yes I'm aware of the aluminum ones now, march of progress and all that.)

The other command mkroot is still using from busybox is the shell, and I'm doing a fresh sh.c from scratch because I'm sick of being blocked on cleaning that file up by tracing the loose wires off into tangents and reverse engineering my own code every time I sit down. The data lifetime rules have changed: my original pass at this was trying to use all the string data in-situ from wherever it came from, whether it was getline() or -c or an mmapped file. While this is very nommu friendly, it means we're writing NUL terminators into mmap() or argv[] data because execv() needs an array of null terminated strings, and don't get me started on substituting in environment variables. Tracking what lives where when you can't just strdup() a copy you exclusively own was WAY TOO COMPLICATED. So start over Not Doing That.

The third one is a red-black tree implementation I can use as a toybox dictionary, a rathole I went down when I started reading the mkjffs2 source to see why it was doing something funky and it's using an old fork of the kernel rbtree.c. Yes, I wrote the linux kernel red black tree documentation ages ago but as I said in the commit I was _asked_ to do that and it was collating various sources (a writeup, wikipedia, etc) that all glossed over important details about HOW and WHY you rebalance. I'm more comfortable with a balancing tree than a hash table for most of toybox's dictionary use cases, so this has been on my todo list forever anyway, and now I'm reading through that and drawing trees and trying to understand what the corner cases are, the undocumented input assumptions of each function and why it's doing what it's doing. The trick about using the low bits of a pointer as the color is simultaneously clever and obvious. So why is it masking &3 instead of &1 for a single bit? I think I found a bug in it already where it's leaving a node's parent pointer pointing at itself, but maybe that loop gets broken later? Dunno.

The real problem for toybox work is that my day job, which is tolerable and lucrative, totally eats my brain and I go home exhausted every day and get nothing done in the evenings. And they keep scheduling 8am meetings so I can't even get up early and reliably have those slots, a 6am alarm just about gets me to work on time. Work's a big Fortune 500 cubicle farm where the average age of my coworkers is about 55, but it's literally paying me 4 times the reduced hourly rate I was getting at SEI (when the paychecks arrived), and I'm still paying down that home equity loan (and saved nothing for retirement the past couple years).

So I have weekends. When I'm not visiting Fade. But I'm getting a little done. Tired, but at least not paralyzed by stress.

April 17, 2018

What is this nonsense?

Author: Geert Uytterhoeven <...>
Date:   Thu Nov 30 14:11:59 2017 +0100

    tty: serial: sh-sci: Hide serial console config question

No, EARLY_PRINTK works fine on qemu-system-sh4, I've been using it. Stop breaking stuff please.

April 16, 2018

The reason I added getconf to toybox is the kernel build was complaining it wasn't there, although the "command not found" messages never seemed to break anything. But now that it's in, the kernel build is complaining that LFS_CFLAGS, LFS_LDFLAGS, and LFS_LIBS are unknown getconf arguments.

So I grep and git annotate, and the calls were added by this commit and it's too dumb for words. A kernel build was creating a dependency file larger than 4 gigabytes on a 32 bit host, and without special arguments couldn't read it.

Let's back up and list the ways this is stupid. A) It's solving the wrong problem: why is your dependency file over 4 gigabytes? B) Nobody ever needed linker flags or extra libraries to enable LFS, it was a #define to tell the libc headers to use the new syscalls and typedef off_t as 64 bits instead of 32. C) glibc implemented the "Large File Support API" in 1997, over 20 years ago.

In 1997 you could already buy a 16 gigabyte 3.5" hard drive (the "IBM Titan"). By 2002 PATA (IDE renamed by SCSI bigots) had to modify its protocol to go above 128 gigs, and Hitachi shipped a terabyte drive 11 years ago. The old api isn't even implemented in musl-libc or bionic, there's ONLY large file support. (Yes even in embedded systems, a _small_ sd card is 4 gigs and they go up to 128g retail.) So still needing a flag to enable this in any version of glibc that's shipped in the past decade is INSANE.

And yet...

April 15, 2018

I've been editing and uploading old blog entries, but got stuck at November 11 for reasons I wound up editorializing about. Then the November 12 entry I left myself reads, in its entirety:

Hah. A recent discussion brought up which was a story.

With the obvious [TODO] item of telling that story. And you wonder why getting my blog up to date is so time consuming?

April 14, 2018

Linux on qemu-system-sh4 still has a broken serial console, due to qemu and the kernel pointing fingers at each other. Rich just pushed a pile of patches that did NOT fix the serial console, and seems to have washed his hands of the situation, so I'm trying once again to get the _first_ serial port working (stop skipping the first port), and I've reminded myself why I haven't done this before.

There's an arm bare metal hello world. Getting that working on sh4 involves A) figuring out what -kernel wants (why isn't the elf loader universal?) B) figuring out what the two line write loop for the existing working port is.

Tried running it under gdb to see if I could get it to run known entry code, but mcm hasn't got a gdb in it. (Thought it did? Do I still need a prefixed version or did it start understanding all the targets in one yet?)

April 13, 2018

Finally finished and merged getconf, which I started working on over a year ago.

I have so much half-finished stuff in my tree complicating checkins of anything else that touches those files. It's a bit like my 8 browser windows with a hundred or so tabs in each. I _could_ use bookmarks, but out of sight out of mind, and there's no reasonable way to browse them. Most of those tabs are todo items of some sort, anywhere from "finish reading/watching this" to follow-up analysis or projects suggested by the content.

One of the reasons I grumble about basic income isn't just that anyone who isn't a 1% Boomer is horribly screwed by the current organization of the economy, it's that I would get SO MUCH MORE DONE if I didn't spend all my time working a day job, and I think it would have a much greater impact and help more people. Making it so anyone with a smartphone could do systems programming would have a greater positive economic impact on the country than porting high-end thermostats from Windows CE to Linux because CE was end of lifed. Or working on j-core. Or doing qcc. Or writing documentation. Or teaching. Or about 30 other things. But the current economic incentives say (quite strongly) to do the other thing...

April 11, 2018

Took an evening and sent yet another perl removal patch to the kernel. (Well it's a merge window.) The workaround to the orc unwinder bug is to rm include/config/auto.conf after configure. (It'll remake it from .config when you build the kernel, but the dependencies are wrong so it won't remake it just because .config is newer and has different data in it.) I could fix the kernel's build dependencies, but that would involve more interaction with the kernel community which I find unpleasant.

April 9, 2018

I caught up with some toybox work over the weekend, by which I mean finishing off some partially done things in my tree and getting them checked in.

Next up is getconf, a can of worms I opened over a year ago, and stopped working on because I had to fly to ELC to give my underprepared simplest linux system tutorial. (Which I should really redo in a coherent fashion someday.)

The problem is it's been long enough I don't remember what I was thinking, and have to reverse engineer it from the code I left and the blog entry at the time.

April 8, 2018

Why do we need Universal Basic Income? In 1840, 70% of the US population worked as farmers. By 2000 less than 2% did. We're automating away a lot of the remaining jobs. Not only can we afford it, we can't afford _not_ to.

The internet has rendered data reproduction and transmission basically free (the Pony Express and telephone Operator used to be important jobs, these days more people have cell phones than running water), the cost of solar panels and batteries is expected to continue exponentially dropping for _decades_ and is already cheaper than installing new fossil fuel alternatives (and installing new solar/wind/battery systems is expected to be cheaper than continuing to fuel and maintain existing fossil systems within 5 years), self-driving electric car fleets and drone delivery are redoing transportation (on top of the revolution shipping containers already caused starting in 1956), 3D printing's just starting to affect manufacturing, and that's not even talking about stuff like mail-order kit housing a century ago...

Economic production has fundamentally changed since the last time peasants did subsistence farming in western culture, the world _profoundly_ doesn't work like that anymore. Women used to spend the majority of their time making cloth (from nobles doing embroidery to peasants endlessly spinning, weaving, and sewing). Nobody can make a living at that anymore (except maybe "artisanal" pieces, I.E. selling it as a form of artwork like a painting) because better versions of the results are available en masse incredibly cheaply thanks to mechanized mass production, and this is true of a thousand different things. The world has moved on, many of the assumptions our society was designed around are no longer true.

"But where will we find the money?" Money is a social construct. The Gross National Product of the USA the year before the 2008 crash was about 14.5 trillion. The federal reserve printed more money than that to stabilize the economy after the crash. The real concern is inflation, although despite injecting a currently estimated $21 trillion into the economy (half again the size _of_ the _economy_) the federal reserve couldn't get inflation _up_ to their 2% annual target rate. (And yes, inflation being too low is a problem.)

Still, the conventional solution to inflation is to tax the extra money away. During World War II the top personal income tax rate in the USA was 91% (kicking in at just under $1 million/year in today's dollars), and the corporate rate was 50%. The top income tax rate was lowered to 70% in 1964, and then Ronald Reagan lowered it to 28% (causing the modern problems with both the national debt and 1% of the country having over half of all money). I.E. during the entire "postwar boom" the tax rate prevented the existence of billionaires, and going back to that would provide plenty of money for basic income. This is a recent problem, easily solved.

And a predictable amount of inflation isn't a bad thing for most people. Rich people hate it, but it's good for debtors. If you owe a bunch of money on a 30 year mortgage at 5% interest, but inflation's 2% a year, you're actually only paying 3%. Over the life of a 30 year loan, inflation could easily mean you're effectively only paying off half as much. If you owe $50k in student loans, 5% inflation pays back $2500/year for you.

The modern "finance" economy is based on creating social construct money out of thin air (and then fighting over it). Michael Lewis covered the creation of the "mortgage bond", abuse of which is what led to the 2008 crash. But the sheer fiction of the modern economy goes far deeper than that.

A decade ago Taxi Medallions in New York City were worth $1 million each, an entirely artificial value placed upon a regulatory monopoly granting exclusive permission to provide a service. Except without the monopoly, a more competitive marketplace provides the service at a gross annual rate less than 1/10th what the medallion is worth. And app-summonable self-driving cars can eventually provide the service for a fraction of that. The actual service people _need_ keeps getting cheaper, and is approaching "too cheap to meter" the way netflix streaming has made video rental "too cheap to meter". How many videos are you allowed to watch per month? They don't put a limit on it, there's no point.

The "70%->2%" change listed above (the decrease in the percent of the population engaged in food production) was a similar productivity revolution: centuries ago scientists discovered fertilizers by burning plants and analyzing the ash (on the theory that anything that didn't burn away was something the plant couldn't have gotten from the air in the first place), then refrigeration and tractors were invented, then Norman Borlaug's Green Revolution (dwarf wheat and rice) _really_ caused production to explode, wiping away the Malthusian concerns of the 1960's... and it's now to the point where forty percent of the food produced in the US is wasted and nobody cares enough to do anything about it. (Not "you left food on your plate and it got thrown out", but never made it onto anybody's plate. Every night at 11pm the hot bar at the grocery store 2 blocks away in Milwaukee is emptied into trash cans, enough to feed like 50 people, and that's not even worth _tracking_ in the modern economy.)

These days the only reason anybody goes homeless, hungry, or without internet access is because we made a choice not to give it to them. Just like we're _choosing_ to deny medical care to people, choosing to deny education to people (videotaped telecourses were available 30 years ago, then khan academy and crash course on youtube...)

The problem is capitalism and billionaires. Capitalists get rich by "cornering the market", I.E. it's not enough for you to provide more, you have to make other people provide less so you have a monopoly. Warren Buffett referred to this as a moat around a business. Capitalism is a mechanism for regulating scarcity, and in the absence of scarcity it creates it. (Despite the inherent "too cheap to meter" nature of the internet, for-profit corporations keep trying to charge extra. Given tools like intellectual property law, they use it to corner existing markets.)

Capitalists aren't just making up fake assets (this Banksy graffiti is worth $15 million dollars because I _say_ it is) and printing money to buy them, they're making fake jobs and hiring people to do them as a way of controlling people. Arguing against basic income by saying "people won't do work"... one of the big and increasing problems of our age is a shortage of _employment_, and people spend their free time doing work they find meaningful. (Each year two million people volunteer just for habitat for humanity. Add in "you can increase your standard of living quite a ways beyond mere subsistence before serious taxes kick in" and a labor shortage is not a real concern.)

Unemployment aside, lots of the jobs people are doing now accomplish literally nothing and the people doing them know it. Nobody can miss the work they're doing because they're literally not doing anything. Estimates are that 40% of all jobs serve no purpose, lots of the rest are in service of nothing (the janitors in the office building where everybody's a brand image consultant), and then entire industries like tax preparation that _do_ currently perform a real service could be entirely eliminated (the government knows what you owe, it already deducted it from your paycheck and HAS the money, we intentionally slightly overpay because we're all bad at saving and don't want to wind up owing extra if we got it wrong, and if your tax filing doesn't agree with what the government thinks you owe you get an audit instead of a refund: I.E. the entire tax preparation industry, software and in-person both, has no reason to exist; it's a lucrative sinecure maintained by a guild/cartel).

The Baby Boomers grew up with all this as normal, but it was new with them. The rise of for-profit health insurance? It happened right before the boomers. The idea of moving out of the house when you turned 18? Unique to the Boomers. Ronald Reagan dropping the income tax rate from 70% to 28%? Boomers. (Note: high taxes make companies spend money on "wasteful" things like worker training and long-term research and development, because the alternative is "losing" it to the government so even small gains are much more worthwhile. Lowering taxes reduces investment, and instead leads to profit-taking because there's no penalty for cutting the company to the bone and pocketing the money.)

An awful lot of us are waiting for the boomers to die and designing the kind of society we want after their ironclad assumptions about what is normal die with them.

(Speaking of which, this and this were really good articles.)

April 7, 2018

Still on cp --parents. The problem I noticed today is that the old code was using basename() not getbasename(). The libc function can modify the string passed into it, which is terrible but works for argv[] because environment space is writeable, but I try not to do it because that changes what other processes see in "ps". (Several entries in /proc read the process's environment space live.) Busybox and chrome-browser have both had problems with that in the past, I'm trying to do better in toybox.

So I need to stop and think through the ramifications of the corner cases where basename() and getbasename() differ in behavior and whether the old code was using it thoughtlessly (probably because it predated getbasename) or if there was a _reason_ for it (in which case I should have left a comment). Probably the first, but I gotta do the exercise anyway.

Ok, according to the man page the corner case is basename("/usr/") will return "usr", so it trims the trailing slash. And if I don't do that then it will open and write "/usr//bin", and the only actual behavior change is that -i and -v would show slightly different output (/usr//bin instead of /usr/bin) and I think I'm ok with that.

This is bolstered by the fact the last commit to touch these lines predates getbasename (even under its old name) by a year.

Yesterday I mentioned an approach that's been on my todo list forever is basically "readlink -f" on both source and dest and failing if dest isn't under source. All sorts of stuff from tar -x to httpd should use that to constrain input or output under a directory.

Except it's not readlink -f, it's readlink -m, as in "mkdir sub; readlink -m sub/not/there" should return a path rather than failing because more than _one_ component at the end doesn't exist (yet).

But what happens if you do:

$ mkdir sub
$ readlink -m sub/none/../../../../fruitbasket

Because the ubuntu version is cancelling "none" with one of those .. entries. Which makes sense if we've canonicalized the path up to this point, each .. corresponds to a single path level... ok, I can presumably do that too. The current xabspath() plumbing doesn't, but if I feed in -1 for the "exact" parameter... heck, I can add -m to readlink. :)

April 6, 2018

Weekend again. (Well, Friday evening.)

My tree's cp.c has the start of "cp --parents" in it, so I took another look and tried to finish it last night (there's a pending feature request but it turns out to be fiddly). Since my question at the end of that last link never got answered, I'm probably just doing the simple thing until I get fresh complaints.

The new problem I hit is that "cp ../../../usr/bin dir" creates "dir/../../../usr/bin" which seems wrong. Did I mention the Free Software Foundation is terrible at designing software and tends not to think things through? They just slap new layers on top endlessly and grow software via accretion. It's kinda annoying.

The tricky part of all this isn't implementing it, it's figuring out what the correct behavior should _be_. Last time permissions were fiddly, this time constraining the output under the target directory is. (I don't hugely care if you follow a symlink in the target because that's pilot error, but the _source_ is more likely to be untrusted. Then again some sort of --constrain option to make sure all the stuff you create is under the target would be nice. I even have the infrastructure for it, just use xabspath() on both (it's the plumbing behind readlink -f) and strncmp. It's expensive, but quite reasonable to add as an option.)

Speaking of options, I hate --longopts without shortopts. Not in the kind of commands toybox should be implementing, rm -r is way faster to type (and more unixy) than rm --recursive, and given that we've already got short options for almost everything needing to say "ls -l --fruitbasket" is inconsistent.

So I'm adding cp -D for --parents (create leading directories), and if I add a "constrain" option maybe it'll be cp -C.

April 4, 2018

It's adorable Twitter seems to think I'm going to stop blocking every advertiser that shows up in my feed. I don't have a Faceboot account, never programmed for Windows and wiped it off every machine I've ever owned, only ever used Horrible Retweets on other people's tweets complaining about Horrible Retweets, and still limit my tweets to 140 characters. I didn't drive for 6 years because I refused to pay an unjust traffic ticket (until a friend needed my help moving). I didn't speak to my father for 10 years after his divorce until my mother's _dying_wish_ was that I start talking to him again.

You can convince me to change. Circumstances can change. I try to _constantly_ reevaluate my positions and assumptions. New reasons to do things come up all the time. But if you try to wait for me to "get over it", when the cause of the problem is still there, I wait for you to die.

April 2, 2018

Built an x86-64 aboriginal image and... it's failing in the exact same way as m68k. Probably the ancient toybox fork it has checked out in downloads/ which means it's not m68k's fault specifically. So Laurent's qemu is off the hook there, which implies m68k is working. Cool.

The netstat thing is weird, the kernel file is giving the hex digits so they wind up in network endianness when read into an int, so I don't need to htons it (although: creepy). But reading it into a long and then typecasting the long * to an int * is still wrong on a 64 bit big endian system. Still, simpler fix.

April 1, 2018

Visiting Fade. It's Easter, everything's closed.

Lots of little todo items in toybox. I did a cleanup pass on netcat, which needed it after the nommu weirdness left it awkwardly hunched.

I also want to make it use generic lib/net.c infrastructure for stuff, starting with xbind(), ideally a version that looks at the sa_family field of its argument to figure out the structure size for itself so you don't have to pass it in as its own argument. This means searching the rest of the code for bind(), but while I'm there lib/net.c also has ntop() handling both ipv4 and ipv6 and returning a static instance of the looked up thing, so I should examine inet_ntop() uses too.

Which means finding that netstat.c function display_routes() is reading "unsigned long" values via scanf() from /proc/net/route and feeding a pointer to them into inet_ntop()'s second argument, which should either be a struct in_addr or struct in6_addr depending on the family constant fed into the first argument.

There's layers of things wrong with this: word size and byte order are both wrong and combine badly. The ipv4 address field is an int (4 bytes) but the ipv6 address is word salad (a structure with multiple fields, because gratuitous complication).

This is why I do cleanup passes, and why I'm uncomfortable promoting commands I never got quiet time to go over.

March 31, 2018

Visiting Fade.

Huh, the qemu website got worse (in a fancy way) since I last checked it. I wanted to look up a link in the mailing list archive, and it's gone all style-sheety and I'm guessing "contribute" would be where that's hiding now... and there's an email address for the mailing list but no archive link, but hovering over the email address (which contextually says subscribe to this) gives me a mailman link, not a mailto: like the link text BEING AN EMAIL ADDRESS implies...

If I didn't already have a history with this project I'd go "this is run by the marketing department of Red Hat or IBM, no actual developers are involved in this project" and move on. It's the website equivalent of a content-free glossy marketing brochure that's trying SO HARD to sell you on the idea that this thing is GREAT and REVOLUTIONARY that it fails to give you any actual technical information.

Anyway, I'm still _subscribed_ to said mailing list and while once again flushing my active thunderbird folders to backup folders to work around thunderbird's horrible design, I saw that Laurent Vivier submitted some interesting m68k patches a few months back. I've been trying to get full m68k linux for 68030 or similar to run on qemu for years, and qemu never supported it (only nommu coldfire), but Laurent implemented most of the missing chunks years ago out of tree.

So I asked whatever happened to his q800 stuff (because there are 51 branches in his github tree, and it's probably been years plural since I last looked at it). He just pointed me at the one to test, so I'm trying to give it a go.

But mkroot can't build for m68k because musl-libc never bothered to implement support for the target, so I had to dig up my old aboriginal linux directory and build the last m68k image for that. (I had aboriginal linux booting to a shell prompt under his qemu fork once upon a time, although it would crash if you did too much with it.)

I just dug up an aboriginal linux m68k image and it booted through the kernel startup messages, gave me the "type exit when done" message echoed out from the init script (which means userspace was running fine)... and then exited immediately. Maybe a tty problem? Failure of oneit to launch the user shell? Sigh, this is more likely to be an aboriginal linux problem than a qemu problem, and I haven't touched it in a while. No idea where I left off. (Building x86-64 to see if it does the same thing.)

(Is the init code leaving the tty in nonblocking mode? Aboriginal's building something like a 2 year old kernel at this point, haven't got the context to debug without more shoveling than I wanna do, putting time into the wrong things...)

Still, Laurent's qemu fork seems decent. Now to just get -M q800 upstream into vanilla qemu so I could submit bug reports against _that_...

March 26, 2018

I should deal with the chrt.c problem building toybox with musl, namely the big #warning "musl-libc intentionally broke sched_get_priority_min() and friends in commit 1e21e78bf7a5 because its maintainer didn't like those Linux system calls".

Unfortunately the warning is true, and the problem is exacerbated by Rich's refusal to provide a #define _MUSL_ symbol you can probe for at compile time to fix up his lunacy. He's broken a bunch of syscall wrappers to always return failure, which you can ONLY detect at runtime and not probe at compile time when cross compiling. This means if you're going to provide workarounds, the code must always be compiled in, in every instance. Unless you identify musl-libc by process of elimination (it's not glibc, it's not bionic, it must be musl).

And he does this a LOT. Musl provides a broken fork() on nommu systems that always returns failure; uClibc simply didn't provide fork() on nommu, so it was a build break you could probe for and work around. Here he only provides the thread APIs for implementing chrt, a command line utility that operates on processes, implemented in a package that never uses threads and doesn't link against pthread.

Rich has very strong opinions on how other people should program, and is willing to punish other programmers for not doing it his way. Since I will _never_ do it his way, I've given up on trying to turn musl into something real and am instead trying to build toybox with the bionic ndk. But meanwhile, musl is in the cloud space, and it's what mkroot currently has working.

March 25, 2018

Tired. I need about two days of recovery time to switch back into proper open source development mode and clear all the little tangents that accumulated over the week, but that's all the downtime I _have_ before the week starts up again, so...

Didn't get a lot done yesterday, my tab closing hit "zcip.c", which should probably be called something else but that's what the busybox version I used at work is called, and it's so HORRIBLY DESIGNED that I really wanted to write one that didn't require a shell script to just Do The Thing. And there's an RFC on it, so it's not hard to figure out what it should do. So I opened a tab...

I wanted mine to autodetect the first wired interface if you didn't specify the one you wanted, so I copied the code from ifconfig.c (maybe it needs to go in lib/net.c later but get it working first and see if there's still commonalities afterwards), and... it's not finding eth0. My netbook has eth0, lo, and wlan0, and it's only finding two of those. Why? The ifconfig code finds all three, and my copy is using the same ioctls against a socket opened with the same flags? What the...?

It's one of these "I wanna stick printfs in the kernel" things (to see why it's making the decisions it's making) and I can't on my host kernel (not easily anyway), and I'm not wasting effort on mkroot right now and I dunno if it would reproduce this anyway (I've never set up a virtual wireless interface in qemu? Or maybe it's always skipping the _first_ interface...?)

Anyway, that ate my programming time yesterday. That and the fact my netbook is REALLY SLOW right now because it's swapping its guts out with all the open thunderbird reply windows and chromium browser tabs. I should really close tabs. And/or get a new laptop, but every time I do that it's spin the dice on what subset is supported by Xubuntu. (I have good luck with ancient obsolete crap because Linux has had 5 years to reverse engineer it and get support upstream. The current shiny stuff has never been properly supported. Possibly the PC world is less of a moving target these days since all the effort went to phones. Or I could get a chromebook, but you can't stick a terabyte of storage and 16 gigs of ram in a chromebook...)

March 24, 2018

Ooh, guilt. Guilt. Somebody emailed me asking where mkroot went _and_ signed up at the $5 level on my patreon right afterwards. Ummm...

Hmmm. How do I say "it's dead" as nicely as possible?

(Rifling through my open email reply windows I found the "last chance to submit a talk proposal" for the Tokyo automotive summit and open source conferences (which coincided with the evening I took down mkroot, so that didn't happen), and the Google Open Source Peer Bonus Award Thingy sending me a "gentle reminder" to update my payoneer info so they can send me the $250 that goes with the award. Both windows are over a week old. I should spend a day closing tabs again...)

March 23, 2018

End of the "sprint" at work, meaning deadlines. I worked extra hours to catch up from monday (contractor, paid for hours not accomplishments, no vacation or sick days; can't complain because the hourly rate's pretty good). So I've gotten very little open source stuff done this week.

Friday: time to catch up on open source stuff! Starting with design work.

After the mess on the mkroot list, renaming the project "hermetic" would be gratuitously picking a fight with a large corporation. But _not_ doing so would be backing down from my legal rights in the face of shadows that _might_ turn into empty threats that _might_ turn into a battle I could almost certainly win.

So I took the project down, because I don't like either option. Now it doesn't matter what it's called, and armchair lawyers can't empty chamberpots over it again.

I gave it a week to stew, and it turns out somebody did notice it was down. I should send him a tarball. (As with busybox, the work I did is out there open source, I'm just not continuing it as a separate project that would need a name. Yes, there are still scars from SCO and bruce. Work is assigning me to work on systemd configuration, my "this is not fun" bandwidth is accounted for these days, thanks. My open source work is either because I enjoy playing with it or because I'm trying to accomplish something specific.)

There were two near-term use cases for mkroot: 1) better toybox test suite, 2) natively compiling stuff. Making the second work without plugging the gaps with busybox is significantly more work than the first, but the biggest single blocker to either is the lack of toysh. Then again I don't need every toysh corner case to get something that can run the init script and toybox's scripts/ and so on...

Ok, if I'm going to merge a subset of it into toybox as scripts/ or similar, I should merge the modules/kernel and modules/native scripts into the main file (those are the only part that can't build natively under qemu with some control-image plumbing, even the dynamic libraries can be added by rebuilding libc natively). I no longer need it arbitrarily third-party extensible if it's not going to be its own project.

I also need to rip out the busybox build. (I've kept an air gap between toybox and busybox ever since I stopped maintaining busybox, originally because of Bruce contamination, then because license. I've contributed things like toybox patch _to_ busybox, but nothing comes back the other way except bug reports.)

Ripping busybox out of mkroot leaves a largeish hole, although all of those commands are also in "make install_airlock" so it's part of an existing todo list. That said, I can't bring networking up without a "route" command (toybox's is in pending), and it really needs a command shell to run (which can handle the init script). The rest is there for native builds (wget and tar most obviously).

March 19, 2018

Fade was sick yesterday, and I seem to have it. Remarkably short incubation time. Possibly something we ate? (Not really defined symptoms, just general aches and fatigue and blah.)

Taking the day off from work, hanging out at Starbucks and trying to apply pending toybox patches and such. (Haven't got the brain to do design work, but I can close some open tabs when somebody sent me a patch I just haven't gotten around to testing.)

I think I know what to do with mkroot: it's two scripts. I can put them in the toybox scripts directory. (Or maybe the kernel one goes in scripts/modules or something, dunno. Little more tweaking needed.)

The airlock step from aboriginal linux already went into toybox, I might as well put the build script in there. My short-term goal with this is to try to come up with a toybox test environment for commands that run as root and need a defined system environment to produce consistent testing results, so...

(The alternative is pretty much abandoning mkroot, because I'm not putting it back on github as its own project. Just no. I wouldn't even bother merging it into toybox, but I have work on system building as one of my patreon goals people are contributing to, so I should get unblocked on fixing the 4 kernel regressions since 4.14...)

March 18, 2018

Fade headed home on the bus, and I camped out at a coffee shop to try to get some programming done, but it closed like an hour later because sunday.

The guy who submitted bc clarified that not only should I not touch it, but I won't hear from him again until it's perfect. So I removed it from pending.

Ugh. Trademark crap on the mkroot list. I just don't want to step in that swamp. I don't want to back down from a fight and change my plans because of the _shadow_ of a threat either. Really, I don't want to work on mkroot anymore if it's going to have people nominally on its side dragging legal clouds over it. Why bother publishing the code? I could just do my own development and feed the results to the android guys. Except the Linux guys _already_ don't listen to me when I send them patches for issues other people in the embedded community poked _me_ about. (As their designated "willing to go into that sewer to meet with the morlocks" person.)

And then I was too depressed in the evening to submit any of the talk proposals. (Especially ones about mkroot, if I'm abandoning the project. But even the ones on other topics... I'm back to "don't want to travel, don't want to interact with the Linux community"...) I never heard back from Jeff anyway (despite multiple emails and a day and a half of waiting), and flying to tokyo's a long way for a conference if I can't hang out with the j-core guys. (About like my visit to LCA in Tasmania: it was nice, but I'm not going back because it's just too much travel.)

Going to bed early.

March 17, 2018

Tomorrow's the last day to submit talk proposals to the tokyo open source summit. A couple minutes of pondering came up with:

Beyond uClinux: nommu in 2020
  - Nommu processors are the single celled organisms of the computing world,
    which means we're surrounded by billions of them without noticing.
    - 256k ram in qemu? ROM kernel, nommu, xip romfs/cramfs.
    - j-core example
    - musl, toybox
Building the simplest possible Linux system.
Building Android under Android.
  - updated version of 2013 talk
  self-hosting hermetic system build
Hermetic Linux
Android on the Desktop

This year it's colocated with the automotive summit, because the linux foundation loves diluting the audience for external conferences it inherits and trying to erode their individuality. I think they got the idea from the way Ottawa Linux Symposium died when the kernel summit forked off and half the attendees stopped coming; this way they can kill conferences they inherit from outside and replace all the "Not Invented Here" conferences with ones they fully control. Hasn't quite worked yet, but they keep trying.

Anyway, I poked Jeff about doing a presentation together with him on designing a new GPS implementation from scratch. Might be of interest to the automotive half of the thing...

March 15, 2018

Fade's spring break was this week, so she's coming to visit this weekend.

Thinking about adding nm because this wandered by and reminded me it's not a big deal (we've already got file parsing ELF), and I'm wondering if there should be a "development" menu in the toybox menuconfig for this? I'm already adding ar, because ar is needed by dpkg. But the main use for ar is static libraries (ala libc.a). So... is it a development command or not?

There should be a name for "decisions that are hard because the stakes aren't large enough to clarify the issue". Sigh.

Let's see: nm -A = names, -a = all symbols (debugging syms), -D = dynamic, -f FORMAT? (-f posix), -g = exports ("external only"), -u imports ("undefined"), --defined-only (not -u), sorting: defaults alphabetic, -n numeric, -p unsorted, -r reverse. And the one I use all the time (as does make bloatcheck) is nm --size-sort.

Not a huge amount to implement, really...

March 14, 2018

Happy pi day! I have an alarm set for 1:59 so I can eat the tiny pie I got from the grocery store. (At 26 seconds after the minute. Update: did the thing.)

I've decided to rename my mkroot project to "hermetic", since the point is to do hermetic builds (specifically hermetic system builds). The phrase "hermetic build" appears to have originated within Google and mostly be used there, but that just means it's not currently got a lot of collisions when you search for it outside the Google bubble.

I need to cut a release with the 4.14 kernel before tackling the pile of breakage in 4.15 and newer, and QEMU is barfing on arm64 because of a QEMU bug (VIRTIO is announcing itself strangely and confusing the 4.14 kernel), which has a fix, but it's not merged in the version I have built. So trying a QEMU 2.11.0 release build to see what works there...

March 13, 2018

Someone commented that 0BSD doesn't require preserving copyright statements in the code, and I typed up some reply text I should probably record here. (I need to do a proper licensing writeup. I've done like a bunch of partial ones over the years.)

Modern copyright law doesn't require notification, hasn't for decades. The internet's pretty good at finding plagiarism, regardless of copyright. And these days authorship info goes in source control, not inline in the source.

The bigger issue is the warranty disclaimer: it's a historical relic. People give medical and legal advice on blogs and youtube channels, but we still expect disclaimers on software because "we've always done it that way". (It's like the White Knight in Through the Looking Glass giving his horse anklets to keep sharks away: if Alice pointed out there are no sharks on this hillside, he'd take it as proof they're working.)

It's there because licensing for PC software started when mainframe developers ported their stuff to smaller machines. The only commercial software back in the mainframe/minicomputer days was written on commission, bought and paid for _before_ it was written. The shrinkwrap software market didn't exist before 1977 because the unit volume wasn't high enough.

The PDP-8 was the best selling computer each year from its introduction in 1965 until it was replaced by the Apple II, and in its entire production run the PDP-8 sold a grand total of about 50k machines. If you wrote PDP-8 software at the _end_ of its production run, there _might_ be about 50 thousand total customers for it in the whole world (if all those machines were still in use and in the market for new software). And it took over 10 years to accumulate that many, and it was _the_ biggest software market in existence.

The Apple II sold almost that many in its first two years. The first machine to sell a million units was the Commodore Vic-20 (introduced in 1980 and selling 600k units in 1982 alone), the commodore 64 was introduced in 1982 and immediately outsold the vic-20 (it did about 2 million units/year through retail outlets like Sears). The IBM PC took about a year to sell its first million units (1981-1982), and so on.

Microcomputer unit volume growing orders of magnitude larger than mini or mainframes changed the _nature_ of the market. Suddenly you could make a piece of software and have a million customers waiting for it. You could write _then_ sell, which was new. (And so was the concept of piracy: if you wrote software for the PDP-6 there were a grand total of 23 built _ever_ and MIT owned most of them.)

That's why Apple got the law changed in 1983 extending copyright to cover binaries. (Bill Gates had been complaining about it since 1976 and even addressed congress in 1980 (yes there's audio), but he almost never managed to accomplish anything himself, he was all about capitalizing on other people's work.)

Back in the mainframe world software was custom tailored to each installation, and each machine cost millions of (inflation adjusted) dollars. If you caused an outage you were easily costing the customer five figures a day, and big companies that could afford a mainframe had lawyers on staff. So you BET you had huge legal boilerplate full of disclaimers and indemnification in that context. Plus, when it's custom software your customer is the first (and likely only) deployment: they _will_ find all the bugs.

So when people started selling shrinkwrap PC software into the microcomputer market, they just copied existing practices, including giant disclaimers that made no SENSE in the new context. But if you ask a lawyer "are we safe from being sued" the answer is NEVER yes. (No lawyer will ever tell you NOT to CYA. They'll tell you to stay in the basement covered in bubble wrap: it's their _job_.) And the "two guys in a garage" operations starting up in the micro world (Gates, Kildall, Bushnell, Jobs, Garriott...) happily copied what the big boys did so they'd look grown up. Keep in mind they were writing "software licenses" _BEFORE_ Apple vs Franklin, when they were clearly unenforceable. But convincing people to go along with what the fancy words say is 90% of the law anyway, and it _looked_ convincing.

Apple v Franklin wasn't the end of software makers paying to change the law, of course. The phrase "shrinkwrap licenses" comes from 1980's license terms saying that by breaking the plastic shrinkwrap around the box you'd agreed to the license terms inside, except the license was printed on a paper _inside_ the box so you couldn't read it until after you'd opened the box. Back before the internet, "informed consent" (the basis of contract law) was literally impossible in the context of a retail purchase. Then software makers paid a lot of money to lobby for passage of the DMCA to retroactively make that legal around 2000, after doing it for years. (And then there's first sale doctrine, lots of software makers insisted the software was a _lease_ not a sale, until in 2011 they changed the law again to end first sale doctrine for software. Proprietary software's kind of a nightmare these days.)

The real reason 0BSD has the warranty disclaimer is its goal was to start with a large block of existing, approved license text, and make a single small change. The warranty disclaimer is established context that provides a security blanket for large corporations. Nothing in 0BSD requires you to copy that disclaimer into your own derived work: it's a disclaimer made by the _existing_ author(s). So if you lift a couple functions and use them in another context, that new thing can be under literally any license. That's the point of public domain equivalent licensing, frictionless relicensing so the license becomes irrelevant. (I.E. Universal donor.)

I remembered the Apple II figure because I wrote about it long ago. It was a lot easier to find my old Motley Fool articles before the database migration that labelled all the really old articles as written by "Motley Fool Staff". (They tend to still be signed -Oak at the bottom, since that was my login handle on the message boards, which is where they hired me from...)

March 12, 2018

Back from visiting Fade.

I merged bc. (More drama! Don't care.) Sooooo much cleanup to do.

Fiddling with the new ndk: -llog is only there for dynamic links ( exists, log.a doesn't), the library probe logic isn't using LDFLAGS right (that's my bug). A static hello world built with the NDK's clang segfaults. And I've confirmed it did the same with gcc: it's that hello world built with _bionic_ segfaults on my netbook. Illegal instruction before making it to main(). Great.

March 10, 2018

Hanging out in T-Rex Cookie with Fade, trying to catch up on the giant backlog of open source todo items, and on IRC there's bc drama. Of course. So graff posts in the #toybox channel to complain about what gdh has done ("have been kicked out of the project and my work replaced by someone who appears fraudulent" was said in a public channel), Thalheim pms me to try to give me context, and I haven't even reviewed this code yet.

Either way, they're submitting their bc to both toybox and busybox, but I'm likely to clean it up extensively whenever I get around to looking at it again. Will they then marshal those changes over to busybox? Will busybox changes come back to toybox (along with license contamination because they didn't clear it properly)? Sigh...

A few days ago Rich tweeted a link to Fabrice Bellard's new arbitrary precision math library, which is available under an MIT license. That's not quite the toybox license, but close enough he might allow me to use it under 0BSD if I asked him? (Long ago he gave me permission to BSD license his tinycc code. I should ask if 0BSD counts, but I haven't had time to poke at qcc in ages...)

Meanwhile, the longer I put off dealing with the "gmail hates dreamhost" issues the worse it gets. The bounces turned into a mass unsubscribe when I let it time out, and I've been meaning to fix that but dreamhost has no https on mailing list administrative access (it's an unencrypted http page you log into with a password that lets anybody mess with your list), so I'm really reluctant to poke at it (kinda the same way I feel about entering my credit card info into any web site ever, I'll do it but with great reluctance and tongs at arm's length, and then breathe a sigh of relief if it doesn't _immediately_ manifest as a disaster)...

But I've now waited long enough that 3-4 messages have posted to the list (and didn't go out to those subscribers), and I don't want to reply to them until I've dealt with it. Let's see... save the batch of unsubscribe notification mails into a directory, mass sed them to get the names out into a file, copy that to the clipboard, pull up the insecure admin web page's mass subscription thing, paste the list there, and subscribe.
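The extraction step might look something like this sketch; the notification wording and filenames here are invented, since I don't know mailman's exact bounce format:

```shell
# Fake a couple of saved unsubscribe notices (wording is an assumption)
mkdir -p bounced && cd bounced
printf '%s\n' 'alice@example.com has been removed from the list.' > msg1.txt
printf '%s\n' 'bob@example.com has been removed from the list.' > msg2.txt
# Mass sed the addresses out into one file for the resubscribe paste
sed -n 's/^\([^ ]*@[^ ]*\) has been removed.*/\1/p' msg*.txt > resubscribe.txt
cat resubscribe.txt
```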

Obviously a far easier interface than giving me command line access to the mailman server. Sigh.

One message was about the new android ndk. Back on Feb 14 I did an x86-64 api 26 build, but libc.a was old. I should see what fixes got updated. Since I can't build this from source myself yet out of their git repos, I basically test and send bug reports, then wait for the next -rc tarball to be posted. It's a slow process, especially since my context switch to respond to each new NDK, after at least weeks of not working on it, is "whenever it makes it that far up my todo list".

I want to work more closely with the Google developers on stuff like this, but not being a Google employee I honestly don't know how. They share a cafeteria. I don't.

March 9, 2018

On a bus to visit Fade in Minneapolis, trying to get some programming time in. Kinda bouncy, but otherwise...

Always weird little design issues. Doing ar, I sort of want to use copy_tempfile() out of lib.c except if the old file can't be opened I want permissions 664 on the new file (default permissions for a new archive in ubuntu's ar), and mktemp() does 600. And it's not really accessible, but setting up and calling mktemp() from the host myself and then doing lstat() and copying the permissions over is duplicating an uncomfortable amount of infrastructure for a tiny behavior change.

Sigh. Only 4 users of copy_tempfile() so far, I should add an argument and have it be "0" when you want to error_exit() if the stat() fails. (Which is fine for things like patch.)
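A quick shell demonstration of the permissions gap (GNU stat's -c flag assumed; mktemp(1) with a template stands in for the mkstemp() behavior the library call uses):

```shell
umask 002
t=$(mktemp ./tmp.XXXXXX)      # mkstemp-style creation is always 0600
tmode=$(stat -c '%a' "$t")
> plainfile                   # ordinary creation: 0666 & ~umask = 664
fmode=$(stat -c '%a' plainfile)
echo "tempfile: $tmode, plain file: $fmode"
rm -f "$t" plainfile
```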

March 8, 2018

And the GOP has destroyed another american institution. "The downfall of toys R Us can be traced back to a $7.5 billion leveraged buyout in 2005, when Bain Capital..." which was Mitt Romney's company "loaded the company with debt.... The company's massive interest payments also sucked up resources that could have gone toward technology and improving operations."

Meanwhile, on the "Capitalism is destructive" front...

March 7, 2018

Sigh. I'm sure I had more blog entries (several) but my netbook rebooted and the forest of .notes-2017.sw? files vi left behind were uninformative.

I spent some of the time on mkroot. Finally got the mailing list migrated (laboriously cutting and pasting the aboriginal list subscriber base to the new list one subscriber at a time) a few days ago, and I'm trying to get that to work.

My toybox "working on this" stack includes tftp, deflate compression side code, implementing ar (because dpkg packages are ar archives containing a pair of tarballs)...
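That .deb structure is easy to reproduce by hand; this sketch builds a toy archive following the deb(5) layout (assumes binutils ar and tar are installed, names are per that layout):

```shell
mkdir -p debdemo && cd debdemo
echo 2.0 > debian-binary                       # format version member
mkdir -p ctl payload && echo hello > payload/file
tar czf control.tar.gz -C ctl .                # package metadata tarball
tar czf data.tar.gz -C payload .               # installed-files tarball
ar rc demo.deb debian-binary control.tar.gz data.tar.gz
listing=$(ar t demo.deb)                       # list the three members
printf '%s\n' "$listing"
```

So implementing ar gets you most of the way to reading .deb files, with tar doing the rest.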

March 6, 2018

I should probably respond to the Linux kernel's new license enforcement statement but I'm not sure I have the energy. Intellectual property law needs to go away, it's something society has outgrown, and they know it, but Max Planck said "science advances one funeral at a time", and the same has been said about math in webcomic form. Unfortunately it's true of the law and society in general.

We have to work out what we want society to look like, then start describing it, explaining how and why it should work that way, and moving the overton window in that direction.

Capitalism creates scarcity: that's what "cornering the market" is. It isn't the only way capitalism does that, but suing farmers over patented crops because pollen blew into a neighbor's fields is evil.

There are so many articles about extending IP law past expiration, from patenting minor variants on existing drugs to raining down lobbyist money to change the law. The open hardware clones of arm and x86 stopped at the last versions too old to be compatible with anything currently in use, and were then abandoned under legal threat if they took one step further, despite the technology they'd be copying having shipped more than 20 years ago. (The only reason j-core exists is Jeff was willing to call Renesas' bluff, do his homework and win a lawsuit if necessary. Plus Renesas wasn't selling SuperH anymore due to politics, so didn't have a large legal leg-breaking budget to defend it by bankrupting people regardless of the merits of the suit.) Here's a classic article on how big players shake down small players for patent royalties and proving you didn't infringe is no defense because they can simply bankrupt you with endless litigation if you don't play ball.

Then add in regulatory capture, and submarine patents (where a patent application is eternally amended to defer issue, then the owners decloak and start suing people when somebody else starts making money in the area, and the patent expiration clock only starts ticking when enforcement starts).

The USA's early success involved a significant lack of enforceable IP law until the 20th century, china's rapid growth was triggered in large part by completely ignoring foreign IP claims... NOT doing this turns out to be way better for the economy than doing it. Yes you have to work out how to pay creative types so they can afford to do it, but "basic income" is a way better solution than restricting distribution of the results in a whole lot of areas. Software wasn't copyrighted at all before 1976 (the Copyright Act of 1976), and the copyrights didn't cover binaries until 1983 (Apple vs Franklin), by which point Unix was 15 years old.

The Baby Boom is dying soon, and there's no reason for the rest of us to continue their cultural assumptions. The top tax rate was 91% from World War II until 1964 (and even that only lowered it to 70% where it stayed until Ronald Reagan lowered it to 28% and started society's modern domination by the 1%). The 91% tax kicked in at just under $1 million/year in today's dollars and prevented billionaires from existing and thus being a problem; America's global dominance happened under that tax regime and going back to it would be a good thing. (High corporate taxes drive investment: they'll spend money on things like R&D and worker training if it would otherwise be taxed away. If they get to keep it, the owners pocket the money through stock buybacks and cut the actual business to the bone.)

We need instant runoff voting. We need universal basic income (which almost passed under Nixon but the democrats killed it in the senate for not being _big_ enough, once again snatching defeat from the jaws of victory). With the green revolution, vertical farming, exponentially advancing solar+battery technology, self-driving electric vehicles becoming transportation as a service, container housing, and AI expected to automate away a lot of the remaining jobs, and many of the jobs we _do_ have being pointless social constructs...

The end of capitalism's a bit like the end of monarchy. It's something people living under it couldn't imagine doing without, but once it's gone it seems simultaneously silly and horrible...

March 2, 2018

Meanwhile, on the advancing solar power technology front, solar microdots.

March 1, 2018

So LWN had an article about a company's software license compliance training program, "including the GPL, Apache, BSD, and MIT licenses, in easy-to-honor checklist form", with "a decision tree for choosing a project license", and they ran 2 day workshops with "short lectures, lightning talks, and small-group breakout sessions".

Beyond complying with existing licenses, they got internal pushback to open sourcing this company's own software due to "Missed revenue generating opportunities", and I'm really, really, really looking forward to the end of capitalism.

Back when phone companies charged for domestic long distance calls, the metering was the most expensive part of providing the service. It cost more to measure and bill for it than it did to provide it. There's an old tension between making something "too cheap to meter" (as people thought nuclear power would make electricity back in the 1950's) vs "cornering the market" and charging money to a captive audience who hasn't got the choice _not_ to use your product or service.

We see this in the internet service providers perpetually wanting to de-commoditize the internet and charge per megabyte. AT&T most recently led the charge for this with their cell phone customers back when they had a monopoly on iPhone sales.

This article once again makes the mistake of referring to "the GPL", which hasn't existed since GPLv3 fractured copyleft into incompatible camps. "The GPL" was a response to capitalism, and the old problem "you become what you fight" is on full display. GPLv3 provides a restrictive license regime full of obligations and the promise of giant legal headaches if you screw up, because its proponents can't conceive of NOT doing that.

This is why I did Zero Clause BSD, a public domain equivalent license designed to be familiar and nonthreatening to large corporations and government entities, without burdening individual developers with strange obligations like 37 copies of the same concatenated license text they're not allowed to clean out. ("You are not meant to understand this, just do it.") I've been testing the Android NDK and the file android-ndk-r16b/sysroot/NOTICE is 63,075 lines of concatenated license text. The "stuttering problem" is on full display.

When the article talks about how BSD and Apache are GPL compatible I look at the stuttering problem and go "define compatible"...

So much wasted effort. Existing software should be too cheap to meter, it's development of _new_ stuff that costs. You amortize the development cost over a small number of years. Back in Charles Dickens' day copyright only lasted 20 years, he outlived many of his own copyrights but kept writing. (He didn't even have patreon.) It doesn't matter what the license is when the copyright has expired. I googled for "software asset depreciation schedule" and the first page of results had 4 marked ads at the top, 4 marked ads at the bottom, and the 10 hits in between were all ads. The point I was trying to make is companies that depreciate software as an asset probably aren't taking more than 20 years to do it, telling the IRS that it's worthless after that point.

If copyright did still last 20 years, and software development was fully amortized over the period and its asset value fully depreciated, then Windows XP would be out of copyright in 2021. It's still what most Windows users _want_ to use, if the ReactOS guys had the source they could fix the security issues. Netscape's long dead. Microsoft suing Linux devs over FAT patents recently provided no value to anybody. That sort of thing still _being_ under copyright is obscene. "How will I still own and control this after I'm dead" is a bad question.

Sigh. People are pushing in the wrong direction, as usual. Applying a licensing regime to something infinitely replicable is not what future generations should be doing at all. But society only frees itself of bad assumptions when people who've lost the ability to question their own assumptions die off.

February 28, 2018

The "80/20 rule" is important. Clay Shirky taked about it in one of his videos, but what I'm thinking about here is you should be able to get 80% of an operating system kernel for 20% of the effort (code/complexity).

I'm looking at xv6 and thinking it's maybe 5-10%, not 20%. To get a mkroot kernel that you can build Linux under, you need 2-4 times as much code as xv6.

I want a simple kernel, libc, compiler, and command line. Capable of rebuilding under itself and building Linux From Scratch under the result. So far I've been doing this _with_ Linux, but Linux added perl, libelf, yacc, and bison as new hard dependencies in a 3 month period.

Assuming you can get a tinycc+cfront capable of building gcc or llvm, they won't run under this because there's no mmap(). (I'm not entirely sure musl will run under it either, because it just uses sbrk() and Rich thinks that's terrible and he tends to remove support for interfaces he aesthetically disagrees with.)

A simple kernel would be single processor, use a simple "generate a software interrupt" system call method, and implement posix system calls. Alas there isn't a system call to get a process list or process data, so it probably needs /proc too. It should have mmu support.

Really, it probably looks a lot like linux 1.0 circa 1994...

I spoke to Jeff on the phone last night and he recommended I look at OpenBSD (which I'll never do, Theo and Stallman are in about the same bucket to me), and netbsd (which is hard to care about since its own developers keep declaring it dead; then microsoft puts money into it for a while to revive it, which is not really an argument in its favor).

February 27, 2018

The linux kernel's going crazy enough that I'm pondering trying to put together a simple build environment with a different kernel, and then build Linux under the result?

That's kind of what I was thinking about with qcc, that it could be the bootstrap compiler you get up and running on a new system, and then natively build llvm or gcc or whatever your final optimizing compiler was with that. (And then rebuild the optimizing compiler with itself, etc.)

Which brought me back to xv6, and I'm finally properly reading the xv6 textbook rather than just skimming, and... it hasn't got mmap. That's kinda important.

I'm not sure what the minimal set of things a kernel needs to support a build environment _are_, but I don't think gcc can run without mmap? (I remember a broken mmap on arm in 2006 prevented gcc from working, back when wanting to natively run a compiler on arm was a crazy thing for me to want to do.)

If you can't build a bigger environment under your tiny/simple one, it's not useful as a bootstrap, is it?

That said, I'm building the kernel with a stripped down miniconfig, so I'm in a better position to figure out "minimal" than almost anyone else. But there's an awful lot of stuff you can't configure out, allnoconfig has hundreds of syscalls...

February 25, 2018

Heading back from Fade's. This bus had outlets. Reading my deflate code, and rereading the rfc. I should separate out and promote gunzip, and finish the deflate compression side plumbing to do gzip. Then "zip" and hooking it to tar should be simple in comparison. Plus compressing the --help text.

Left my headphones behind. Can't get new ones because Fedex is evil and nothing else in walking distance of downtown milwaukee has yet revealed an offer of headphones.

February 24, 2018

Cut a toybox release at Fade's. Otherwise offline.

February 23, 2018

Heading to Fade's. The bus does not have an outlet, not online much.

February 22, 2018

Trying to test some stuff, I have "sleep" processes I can check with ps and pgrep and so on, but there could easily be legitimate "sleep" processes running on the system so what I wanna do is "ln -s sleep xiphoid" and then run ./xiphoid 30 so I'm pgrepping for a sufficiently unique name in my test script. The problem is if it's toybox, it doesn't know what "xiphoid" means and won't act like sleep.

So toybox_main() needs to follow symlinks to find a command it recognizes when basename(argv[0]) is unknown. I think one level should be enough?

February 20, 2018

Still trying to clean up toybox for a release (logger's being stroppy, turns out it won't build under musl for reasons Rich has acknowledged are a musl bug, but in the meantime I should inline the priority and facility name tables).

Which brings up an interesting question: how do you list the options? For "kill" there's the -l option, for ps they're listed in the help text (which makes that command's help text outright unwieldy).

The way qemu handles this is "qemu -M ?", which is cool and obvious (you see it once, it's easy to see what it means and easy to remember)... except ? is a shell wildcard. So 99% of the time it'll work, but every once in a while you'll have a single character file in the current directory and it'll get substituted in and break.

February 19, 2018

I read an article on founding a small start-up that _stays_ small that reminded me of an excellent earlier article comparing the growth strategies of Amazon vs ben and jerry's, which relates to something Eric Raymond said (back before he went crazy) about how open source software works like a dentist's office or a law firm: you have two or three professionals and some support staff, and that's your business. "The law offices of Dewey, Cheatam, and Howe, LLC." The kind of business where professionals sell their expertise does not naturally scale up and become a multi-billion dollar business. "The next Microsoft" doesn't work that way.

Capitalism likes cornering the market, and extracting revenue without doing work. Forced routing through a toll bridge is an obvious way, doing work once and charging for it a million times is a minor variant. When Intel required multi-billion dollars to build a fab and thousands of people to design the next processor, sure: amortizing the huge start-up costs over an enormous production run made sense, and was a natural moat around the business. But open hardware? In its entire history the j-core processor has had commits from a little over a dozen people, and most of them didn't work on it at the same time. And the reason they _could_ do it is the patents had all expired, so they could implement in the shadow of prior art. Doing that _again_ would be a question of sufficient individual expertise, not how much money you threw at it or what IP restrictions you could fence off.

What we need is financing that can support a dozen people indefinitely, one of Brooks' "surgical teams" from The Mythical Man-Month, so we can do the work. This is not something the market is set up to provide funding for. (Corporations used to, but not so much these days.)

Late-stage capitalism giving way to basic income would potentially cause a great increase in certain kinds of engineering productivity. We know this is true because open source, wikipedia, the blogosphere, youtube. Creative people WANT to make stuff, they earn money to afford to be able to do so. Take money out of it, production efficiency increases.

February 18, 2018

Walked to the Avalon Theater to see Black Panther. It was good. Along the way I found the closest McDonald's (about 20 minutes walk south of work, I.E. _away_ from my apartment).

And I found a gas station that stocks the Monster Muscle cans I've been trying to find since the convenience store near work ran out of them. (I bought 5. It also has the discontinued banana version, which implies it has a stock of them from a while back, who knows if it can still get more after that. The nearby convenience store says its distributor stopped carrying any of the monster muscle.)

I like them in part because while fasting, and caffeinating heavily as an appetite suppressant, it's a source of protein for only 200 calories. But it doesn't seem very popular in the wider world. (Given that the chocolate and strawberry flavors were terrible, I'm not that surprised. But the vanilla's good.)

Fuzzy posted a photo of Fade's banana bread recipe to slack, and I should write it down: 1 1/4 cups sugar, 1/2 cup butter, 2 eggs, 3 very ripe bananas, 1/2 cup buttermilk, 1 teaspoon vanilla, 2 1/2 cups flour, 1 tsp baking soda, 1 tsp salt. Heat oven to 350, grease 2 loaf pans, mix sugar/butter, add eggs, add bananas buttermilk and vanilla, beat until smooth, stir in flour baking soda and salt until just moistened, bake for 1 hour.

February 17, 2018

Two's complement is the obvious way to do signed integers. The C++ guys are trying to standardize what posix basically already did. (Hint, if your compiler is _required_ to support two's complement behavior, that's how it's going to treat all signed integers. Implementing _two_ sets of signed integer behavior is deeply crazy.)

And yes, this means the compiler optimizer guys who have made integer overflow Undefined Behavior and intentionally break it are crazy, and you have to work around their crazy by typecasting pointers to longs to compare them (typecast numbers to unsigned, do the math, and typecast them back). But you had to typecast to char * to get byte offsets anyway, so using long instead isn't as big a deal. (Yeah yeah, unsigned long, but again if signed integer wrap is two's complement it works either way.)

And since this came up again, one reason you can't have one program linked against more than one libc is each libc instance maintains its own heap, so you malloc() from one and free() into the other and Bad Things Happen. Note that statically linking your program and then using dlopen() to pull in a library that links against a dynamic libc will _also_ usually do this.

This is why shared libraries load other shared libraries, and dynamic linking is recursive. If you try to statically link a dynamic library to eliminate external dependencies (my library doesn't need to pull in zlib, I created my .so with --static), subtle badness can happen.

(And this is entirely in C! Do not open the C++ can of worms. There lies madness. And very smug people with stockholm syndrome bragging that they have 20 years of experience soaking up punishment and believe they have learned where every sharp edge is and C++ isn't so bad as long as you don't make it angry or make direct eye contact or mention the existence of petunias, and it works very hard and is under a lot of pressure and always apologizes when you get out of the hospital and really given how it was raised it's doing the best it can and if they left it would only start drinking again and it's getting better, why the most recent stint in a standards group fixed all the problems for sure this time, it must have...)

February 16, 2018

Catching up on the Linux Kernel Mailing list, using my standard procedure: Go to a sane lkml archive (even when sitting down at a new machine the URL is easy to remember, and the archive is one link away from there), then pick a recent week I haven't read, search for "torvalds" in the thread view page, and read each message from Linus. (Right click open in new tab, because clicking on the link directly loses your place in the text search.)

Sometimes I read the message Linus was replying to, or the entire thread he participated in, and I sometimes read messages from other names I recognize or click through an interesting title I notice. But "I read all the posts Linus made that week" is my definition of "done".

I tend to do this in batches, because it's easiest to read completed weeks (you don't have to check back to see if new posts have shown up), although the browser highlighting will show you which ones you've already clicked through. (The advantage of reading the current week is you have the option of participating in the discussion, but I generally haven't got the bandwidth.)

This time there are a bunch of links to bookmark, such as an anecdote about why C++ is such a pain to compile. Here's an updated statement about 32-bit support. And the kernel developers seem to finally be ready to take llvm seriously.

Some of these I should reply to even if it's a few weeks late, such as this one about perl, which I should reply to with a perl removal patch for the new nonsense that showed up in the arm build. (Maybe with reference to my original perl removal series.)

And I should reply to the compiler version bump to 4.5 to say I was doing a 4.2 compiler but stopped, and it was about licensing issues, but llvm is feasible-ish now, so...? (Really musl-cross-make started providing something usable so I stopped doing the sisyphus/necromancy thing.)

Oh no. Flex and Bison. Have they really started requiring flex and bison to build the kernel? Yes they have. Sigh. Hopefully when that stabilizes they can do the _shipped trick they did to make menuconfig not need flex...

February 15, 2018

I wrote up Too Much Detail when responding to a mailing list message about whether or not getprop should be in toybox, and decided not to post it, so saving it here instead:

The original idea was hermetic system builds, and when I started that was defined as providing enough command line tools to build a development environment that could rebuild itself and then build linux from scratch under the result. (On the assumption that once you've built LFS, you have enough infrastructure to build any arbitrary additional package under the resulting system.)

So any command line utility needed to build the kernel, compiler, libc, or toybox itself (well, busybox back then), if it wasn't a tool provided _by_ the kernel, compiler, or libc, was therefore something toybox had to provide. Then LFS needed a few more things.

Except... glibc was providing getconf and iconv and uClibc/musl weren't, so toybox needed those to make a bootstrap circle with those libraries. And Linux From Scratch would often have to build its own version of a command which toybox already provided, but which turned out to have more features required by later package builds, so toybox grew those features so you could skip the otherwise redundant package builds. (You could still build them to regression test, but they shouldn't be _required_...)

And of course when you're at a shell prompt on a toybox system, if you haven't got "ps" you really miss it even if nothing in any of the builds ever used it, and you'll miss command line history and vi and less... most of the missing stuff was posix or LFS commands so a triage of those produced lists of stuff we should probably have. (Some of which was stuff like "cal" that was easy to do, even if it wasn't hugely useful.)

And then there's an actual automated build system itself: it's going to want to wget source packages, apply patches, extract tarballs in the three major formats... not being able to do that is noticeably limiting to _implementing_ an automated hermetic build.

And if you're building natively, the init script in the resulting system needs to be able to mount stuff...

All that fed into the current roadmap: list of posix commands, list of lfs commands, list of things needed to build LFS, and "requests" which are largely filtered by "easy to do given the infrastructure we've already got".

There have been two major changes this decade:

1) I used to assume I'd be bootstrapping Linux distributions (red hat, debian, gentoo) under LFS. Now I want to boot AOSP under a pure toybox system with no additional GPL packages required. (But if there's an existing non-gpl version that the toybox system can build, I don't necessarily need to provide one.)

2) the compilers rewrote themselves in C++ for some reason, and nobody's done a modern cfront in a while, so the toolchain needs C++ support not just C or you can't bootstrap llvm under the result. Figuring out whether this includes crap like the boost libraries (or they can be built natively on the target system) is a todo item.

February 14, 2018

Another attempt to build toybox with the Android NDK, I downloaded the current version which is -r16b, and ran the make standalone toolchain script for --arch x86_64 and --api 26 (which is the version in Android O).

Unfortunately, while toybox compiles with that it doesn't link. Doing CROSS_COMPILE=/opt/android/x86_64/bin/x86_64-linux-android- CFLAGS="--static" make 2>&1 | sed -n "s/.*undefined reference to '\(.*\)'/\1/p" | sort -u | xargs (as you do) yielded:

__android_log_write facilitynames getgrgid_r iconv iconv_open prioritynames sethostname stderr stdin stdout

Which is rather a lot of missing stuff. The annoying part is all that stuff was found in the headers, or else the compile never would have made it to the link stage. So the NDK's headers are providing stuff the bionic static library isn't.

February 12, 2018

Capitalism is a mechanism for regulating scarcity, and in the absence of sufficient scarcity capitalism will create it.

That's why I'm worried about what capitalism's going to do to solar power. They're trying to turn "buy solar panels once and have electricity for 40 years" back into a rental model with unnecessary middlemen charging you a monthly fee. Not just a mortgage to buy the panels, but "we install panels on your roof and then charge you for the electricity". It's stupid and stuck in the past, but there's a lot of money (and entrenched assumptions of powerful people) behind it.

The internet promised freedom and equality and a lack of scarcity. You could endlessly copy digital information so paying for copies was nonsensical. But then capitalism cornered the market so you have to put your video on youtube if you want t-mobile not to count it against your monthly streaming data cap, and youtube has automated DMCA takedown requests. And yes I wrote about it at the time, but didn't expect people falling for facebook's ring-fenced private property a _second_ time (after AOL did it the first time), or the political damage that would do when capitalists learned to leverage the flaws in their business model to defend the next iteration of "Leaded Gas" and "Tobacco Industry" from the end-stage lawsuits, as we figure out how many people fossil fuels are killing each year and reality threatens a profitable business model.

Capitalism works like linear algebra trying to maximize income and minimize cost, and plugging a zero into any of the numbers breaks the model. They'll use up all the free air and water and chop down the forests and slaughter the buffalo until it becomes scarce enough the price of obtaining one more goes up above zero. Wasting a million gallons of water to save a penny is the "right answer" according to capitalism. This is a problem, and an increasing problem as time goes on.

People who chant "There's no such thing as a free lunch" as an article of faith don't believe in them and don't trust them when presented with them. And truly devout capitalists poison free lunches to make you buy lunch from them.

Star Trek didn't just predict flip-phones and voice recognition, it predicted a post-scarcity society that had done away with capitalism. You can't HAVE post-scarcity with capitalism: CAPITALISM CREATES SCARCITY. It's called cornering the market and it's how you get rich.

February 11, 2018

Made it home just in time to park the car, hug fuzzy and pet the cats, then head out to the airport for my flight back to Milwaukee.

Well, that ate a weekend.

February 10, 2018

Driving back to Austin to drop off the car.

February 8, 2018

Google testers keep bringing up code purity issues I mistake for real problems.

This week it's "address sanitizer complains about a read from malloc(0) return value which is never used", and I thought that meant bionic was returning NULL instead of glibc-style zero sized heap allocations, and that the code was following a NULL pointer when it shouldn't.

So I added an extra NULL test to the variable initialization... But bionic _isn't_ returning 0, it's doing the same thing glibc and musl are doing, returning a valid pointer into the heap that's a zero-sized allocation, which is safe to read from (even at the end of heap it's followed by at least sizeof(pointer) internal heap data) but the results are meaningless. And we already weren't using the results when the count was zero: so the code worked in practice, but not in theory.

I.E. I fixed it wrong because I thought they were complaining about a real issue rather than a theoretical one, and then had to do a separate fix to mollify their bug dowsing rod. But what they were complaining about was reading uninitialized memory, which _isn't_a_thing_. The kernel gives us initialized mappings, always. The program may not properly track what's happened to it since, but if we're not saving the result it doesn't MATTER.

The high water mark of this is still the ls valgrind thing where I had a for loop adding numbers from two arrays and saving the total in a third array, and then conditionally using the entries in the resulting array. I didn't care whether or not the fields in the first two arrays had been initialized because the code that used the _output_ did that, and valgrind freaked at reading uninitialized data (to do addition, then never using the result), so I had to add a useless memset to make them happy. (I could have tested whether each field was used in the calculation loop as well as the display code, but it would have quadrupled the size of the code for zero ultimate behavior change. It was faster to just do integer addition on the cache lines we'd already faulted in for the adjacent fields we were using, then decide what was needed at display time.)

The ONLY way that change could ever affect the behavior of the code is if an out of control optimizer decided to damage code to punish access to "undefined" (but correctly mapped) memory. If it does what the code SAYS it would always be right.

There are plenty of things that work fine in practice but not in theory. That generally means your theory is wrong.

I suspect thinking that C++ and C are the same thing leads to treating C as toxic waste only to be handled with full hazmat protocol, rather than "think it through down to what the hardware is actually doing". (Because in C++ you CAN'T think through to what the hardware is doing, it's got layers of gratuitous abstraction that change behavior annually without you ever changing your code.)

February 7, 2018

Decided to drive the car back to austin this weekend rather than next weekend (of course more snow is coming), and Southwest screwed up so badly I've cancelled my "Rapid Rewards" account.

I tried to use my $144 flight credit from cancelling my return trip from ELC last year (since Jeff flew me straight to Tokyo from the west coast), and the site barfed because it expires tomorrow. Called customer service and it turns out you have to _complete_ travel by the expiration date, not just book it. (That would have been good to know.)

They suggested I call another customer service number to see if they could make an exception (since I'm trying to book travel for sunday to fly back from Austin to Milwaukee, it's an extension of 3 days). And after half an hour on hold the customer service drone tried to charge me $100 to _not_ help me. ("All I can do" was buy a six month extension, and then they started into a long explanation about how this wouldn't let me apply credit to the Sunday flight, but instead I would be mailed a new voucher. So why bring it up?)

Meanwhile Expedia found a flight that's cheaper than Southwest would have been _with_ the $144 applied, so it's not actually a loss. But I am "this company needs to die" levels of disappointed in them right now.

Maybe I'll forgive them after six years. (Historically when my vindictive streak is triggered it tends to last an even decade, but... I don't really care about southwest enough to hold a grudge? They're just useless and incompetent. They lost the "most airlines suck but this one is good at getting a plane full of people from point A to point B for an obvious price" special regard I held them in, and since they _don't_ sell through the same site others do, why bother to go look at them specially anymore?)

Hmmm, way back when I read articles on the history of the company and how they felt they were competing with ground travel rather than other airlines, so had to keep improving even when they already had a huge competitive advantage. (This is why I described them as proof you could get the contents of a greyhound bus airborne.) Ah-ha! Their founder retired in 2008. Add ten years for his residual influence to attenuate and all their policies to be replaced by corporate drone du jour industry average BS, and yes. Southwest is Just Another Airline now.

(Same thing happened to IBM after Lou Gerstner left. Sam Palmisano followed the roadmap Gerstner left for 5 years, then stepped down at the end of it, and handed off to a clueless corporate drone. Neutron Jack Welch leaving GE has been bad for that company too. Corporations try very very hard to treat humans as fungible (any unique individual is a liability, you must break up with them before they can dump you and find an appropriately bland beige robot), and it's a total lie. Steve Ballmer was a boring punch-clock villain, not an Evil Mad Scientist like Gates. Apple with and without Steve Jobs is a totally different company; I'm aware that Ive is doing the Palmisano thing of running out the clock of residual inspiration Jobs left, but it doesn't change the "10 years later you're kinda screwed" timeline. A conglomerate without a good CEO steering is in for a hard time.)

It's a pity my 3 waves talk at Flourish never got the recording published (despite me poking them about it repeatedly for over a year). I should try again...

February 6, 2018

I didn't get to write the initmpfs patch over the weekend, or the new perl removal thing for arm, or something to fix the ORC dependency, or updated initmpfs stuff, or nearly as much toybox stuff as I wanted.

This weekend Fade was visiting. Last weekend I moved into a new apartment. Next weekend I drive to visit fade so she can use the car. The weekend after that I drive the car back to austin so it's not getting a $40 ticket every time it snows (on street parking isn't valid during DPS operations, I need to move the car... to _where_?)

And at the end of the day, after two 20-minute trudges through snow to do 8 hours of porting legacy code in a cubicle (Ubuntu in a vmware window on a windows machine with outlook) I'm too tired to do much. And I can't do the "get up early and program before work" thing because they schedule 8am or 9am meetings 4 times a week, setting the alarm for 6 is barely enough time to make the 8am meeting.

I'm hoping that _next_ month I actually get a weekend to myself.

Oh well, at least there are no cats. So I'm getting a _little_ done.

February 5, 2018

Politics makes me angry because there's a lot of "that's not the real fix" going around right now.

Any time you have a "two party system" your politics are broken. First past the post voting needs to be replaced with instant runoff and more of a parliamentary system.

Capitalism served its purpose: it regulates scarcity we no longer _have_. Most of the scarcity we wrestle with these days is _artificial_, created by cornering the market and protecting entrenched (outdated) interests. An economy where 60% of the population is full-time farmers is very different from 2% farmers, and we have not adjusted. We still threaten people with starving and freezing to death, while 1% of the population collects 90% of the output.

Between solar power with battery walls, self driving cars, the green revolution, vertical farming, container homes, internet on everybody's cell phone... most scarcity is pretty darn _optional_ for first world populations right now. Yeah the future is "unevenly distributed", but a "moon-shot" style deployment (a la FDR's Tennessee Valley Authority) could get new infrastructure everywhere in under 10 years. We've done it before, more than once! But instead we subsidize oil companies by billions of dollars each year so they can turn around and spend it on lobbying to keep the subsidies going. We spend more on defense than the next dozen countries combined, decades _after_ winning the cold war. The third world is likely to leapfrog the first in things like rooftop photovoltaic and TAAS ("transportation as a service") because they haven't got existing infrastructure to replace, so they're not throwing good money after bad maintaining expensive legacy infrastructure. (They already did this with cell phones.)

Bullshit jobs. Universal Basic Income. Billionaires cornering the market, the 1% at Davos... These are radical positions the same way that abolition and women's suffrage were radical a century ago. They are major societal changes whose time has come. (Even a lot of racism is economic scar tissue that continues once the original reasons are long forgotten. See also african slavery and the demand for malaria-resistant plantation labor 300 years ago...)

Unfortunately the old geezers on top of the current pyramid are terrified of change and will attack anything that challenges the status quo. Society advances when old people die. I am _so_ looking forward to the end of the Baby Boom. But then I'm generation X, waiting for Boomers to get out of the way is _our_ defining shared experience...

February 4, 2018

Dropping Fade off back at the Greyhound station, we walked to the Stone Creek coffee shop across the street from the bus place to hang out with laptops for a bit.

In theory the greyhound station is as far from work as my apartment, just in a different direction (west instead of north). That's part of the reason greyhound might be a better option than driving, I could go there friday after work and get on a bus to Minneapolis, then back sunday night.

In practice, work isn't open on sunday, and neither is the Pita Pit we were navigating to as halfway point. Milwaukee, outside in February, is really really really really cold, and a half hour walk in it is a lot less pleasant than a fifteen minute walk. Even with a break in the middle (which turned out to be at potbelly subs, which was open and warm). Fade may be used to this, but I'm not.

Finally got some time to poke at pending toybox issues. I noticed the crc32 command in ubuntu (in both the new ubuntu 16.04 I set up for work and in my netbook's 14.04): "dpkg-query -S $(which crc32)" says it's in the libarchive-zip-perl package, which is disgusting but apparently commonly installed. My crc32 logic can spit that out, it's "toybox cksum -HNLP", so a simple NEWTOY(crc32) with NULL arguments and a crc32_main() that sets toys.optflags |= FLAG_H|FLAG_N|FLAG_L|FLAG_P; before calling cksum_main()... and it doesn't work... because I forgot FORCE_FLAGS. (Using cksum's flags when cksum is disabled, they get zeroed unless forced.) Ok, now it works.

So I can trivially provide this as a new toybox command with just a couple new lines, except A) if there's no file ubuntu's crc32 exits immediately instead of reading from stdin (I'm gonna call that "their bug"), and B) it only prints the filename when there's more than one argument. That's a design decision: grep works that way, sha1sum doesn't. What's _inconsistent_ is that cksum always prints the filename.

I already added -N to cksum (the ubuntu one has no arguments, mine lets you select endianness, pre/post inversion, and whether to include file length in the crc). Tweaking -N to not display the length either makes sense. Having -N also only print the filename if there's more than one argument is less obvious, but adding a separate option just to disable that is kinda silly... (It's another one of those "the difference is too small to have an obvious right answer" things.)

Oh, another difference is that crc32 always outputs 8 hex digits where mine won't include leading zeroes. I think mine is wrong, and I should fix that.

Hmmm, the "print name or not" logic is actually a little more subtle: cksum doesn't print the filename when there is no filename (zero args, reading from stdin). The ubuntu one does, toybox doesn't yet but should. So if crc32 decrements argc by one, then the logic matches up... EXCEPT then we depend on optc being signed (because optc == 0 becomes -1) and while it _is_, I'm uncomfortable with leaving open the possibility of optc changing to unsigned at some point in the future and breaking this. (Well, a test suite entry should check it and regression testing should catch it, but in the meantime it can be "if (toys.optc) toys.optc--", which would also work with unsigned.)

February 3, 2018

Hanging out in Tiny Apartment with Fade. Introduced her to the nearby grocery store, which is pretty much all I'd found in the area. Finally tried the Gyro place around the corner, which is pretty good.

I've been caffeinating pretty heavily during the week, and not having any during the weekend, so there were some unexpected Attack Naps on the new air mattress.

February 2, 2018

Fade's come to visit, trying out the Greyhound bus from Minneapolis to Milwaukee. Given the security theatre at the airports, the bus takes about as long as planes do, and it's got more legroom, wireless, and an outlet to charge a laptop. So we thought we'd give it a try.

My plan was to pick up my car after work, drive to the greyhound station, get a little programming done at the coffee shop across the street until her bus came in, then drive her to Target to get an inflatable mattress for The Tiniest Apartment. (I've been sleeping on the two stacked sleeping bags I brought with me in the car, and given how hard the floor is it's still noticeably unpleasant, so I needed to do that anyway.)

Problem 1: Milwaukee may be a very walkable city but _driving_ through it, in the snow and slush and a layer of salt congealed on the windshield and covering all the road signs and lane markings, is No Fun At All. I managed to turn the wrong way on a one way street _three_ times. (Also, half the streets are two way and half are multi-lane one way and the lane markings are identical even when you _can_ see them.)

Problem 2: her bus was delayed by 2 hours leaving minneapolis, for reasons I'm still unclear on (they had to find a new pilot).

Problem 3: The coffee shop across the street closes at 7 and I got there at 6, with Fade now expected to arrive at 10. Not worth setting up, really.

Wound up going to Target myself, then going on a Quest For A Food Place That's Still Open to bring her a dinner-like item. Downtown kinda switches off after work. (Luckily the McDonald's near the Target is 24 hours, and grilled chicken snack wraps are almost like food.)

Got zero programming done, though.

February 1, 2018

Stopped at a second starbucks to redeem my Free Birthday Thing, and they don't do it either. Something about corporate vs franchise stores. Gave up and uninstalled the starbucks app.

January 31, 2018

A message on lkml fiddling with initmpfs wondered why it checks that you don't set root= (I.E. "as there must be a valid reason for this check...").

Backstory time!

I didn't want to switch rootfs to tmpfs all the time because it uses very slightly more resources, and if you're overmounting it with a fallback root= filesystem anyway those are wasted. It's a tiny waste, but it would be there on every system, so the check.

The _proper_ check would be that you have an archive to extract into initramfs: if you're extracting an archive into initramfs then you're using initramfs as your root filesystem, and thus making it a tmpfs instead makes sense.

Unfortunately, for years the kernel's default initramfs description was three lines or so that created a /dev directory and a /dev/console entry. It was meant as example code, but when you didn't specify initramfs contents the build wound up using it anyway and creating a tiny (150-ish byte?) cpio archive with /dev/console, and gzipping it up. So initramfs would have a /dev/console in it, and then get overmounted and ignored.
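If memory serves, the default description file fed to the kernel's gen_init_cpio tool was roughly:

```
dir /dev 0755 0 0
nod /dev/console 0600 0 0 c 5 1
dir /root 0700 0 0
```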

And then the init/main.c logic grew a _dependency_ on this /dev/console. When opening stdin/stdout/stderr for pid 1, it basically called the open() syscall in the new process context with /dev/console, before pivoting out of initramfs. It worked because it was there, and then when it STOPPED being there (because I pointed out the default output and they fixed it) your initramfs wouldn't have stdin/stdout/stderr so they added a gratuitous mknod in initramfs context.

This feeds into the devtmpfs_mount patches: right now there's a kernel config option to automagically mount devtmpfs when the system comes up, which ONLY applies to the fallback root= and not to initramfs. So I have a patch to add support, which is necessary if you create an initramfs by pointing the kernel source at a directory of the initramfs contents as a normal user: it's the simple straightforward thing to do, but doesn't automatically add /dev/console, and you can't create the device node as a normal user.

While I was there I cleaned up the kernel config stuff so you can tell it all the current user's files should belong to root in the initramfs. Why nobody did that before I couldn't tell you: you had to specify which uid to map, meaning your config had to know gratuitous details about your build system.
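For what it's worth, building an initramfs from a directory as a normal user then looks something like this in the kernel .config (path illustrative; if memory serves, -1 means "squash all ids to root"):

```
CONFIG_INITRAMFS_SOURCE="/home/user/myroot"
CONFIG_INITRAMFS_ROOT_UID=-1
CONFIG_INITRAMFS_ROOT_GID=-1
```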

Anyway, I remember how somebody had a problem because their cpio.gz filled up more than half their ram and it failed with initmpfs but worked with initramfs. (Due to 50% of total memory being the default tmpfs size limit, so it filled up during the extract and stopped extracting.) I don't remember if lkml was copied on the email exchange but it resulted in this blog entry from the affected party.

Meaning I need to be able to specify "no really, rootfs should be ramfs" unless I can pass through size= to tmpfs options; otherwise there are real world failure cases that hit existing people.
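For reference, the existing escape hatch is a kernel command line option forcing the old behavior:

```
# kernel command line: keep rootfs a ramfs (no 50%-of-memory size cap)
rootfstype=ramfs
```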

Unfortunately, some people clearly still don't get it. (Those are instructions for copying your initramfs into a tmpfs mount and then doing switch_root. My patches to let rootfs _be_ a tmpfs were merged in 2013.)

January 29, 2018

I wrote a thing on hermetic builds. It's related to the shared library part of the toybox design page, which came up when the bc guys wanted an external lib.bc file to implement bc -l and I said we might as well make it a big string constant in its own file (or with some #include magic).

January 28, 2018

I should probably have a page somewhere of "classic links", on topics that I should remember to introduce people to. (I have a links page but it's old and doesn't have summaries. I tried to put a few on the kernel docs page I used to maintain, but lost access to update that in 2011.)

One is the "Resource Curse", which is the problem that if most of a country's income comes from something like oil revenue, the country's government doesn't need 99% of its people. If you can't strike for better conditions because your labor is neither the source of income nor the thing that income is buying (everything, including cheap labor, can be imported), you have no natural leverage over those in power.

This is why you get "oil oligarchies". Countries like Russia and Saudi Arabia that earn the majority of their income from oil tend to have zero respect for human rights because if a plague wiped out 99% of their population the ruling elite wouldn't necessarily lose any income or amenities.

This is one of the reasons people are fighting for basic income as we automate away entire sectors of the economy: a century ago more than half the population worked as farmers, now it's less than 1%. The service and transportation industries that replaced them are also being automated away. This isn't a new problem: the Luddite movement protesting textile factories automating away weaving jobs happened over 200 years ago. But the erosion of the bargaining power of labor during the lifetime of the Baby Boomers has led to a real possibility of a technology-driven Resource Curse where the government doesn't need the people because we've got solar powered factories delivering 3D-printed goods via self-driving drone, and less than 1% of the population has any work you'd notice stopping if they went on strike. The Boomers won't live to see this, but the rest of us might.

I'd love to set up a conversation between David Graeber and Clay Shirky where they talked about this sort of thing for an hour. I really want to hear what they'd have to say, because I've got nothin'. (Shirky's Looking For the Mouse talk and Graeber's Bullshit Jobs essay play off each other quite interestingly.)

A persistent problem is that rich people are insulated from the consequences of their actions by a cushion of wealth, so they can be DAMN STUPID. (Hence the libertarian fish tank filter issue.)

And the global warming denial people are the Tobacco Institute are the leaded gas defenders: there are some good writeups about how those are literally the same people moving from one think tank to another as the funding sources change over the years.

And writeups on how capitalism is all about cornering the market and creating scarcity...

Sigh. I do my own writeups sometimes, with links to other people's stuff, but they get buried and lost in this blog. Dunno where else to put them. I haven't had a regular column with an externally imposed deadline since The Motley Fool days. (And those old archives are buried too, even things that made quite a splash at the time...)

But really, there's stuff out there that people should already know. Most of them _don't_, and I should have a place to point them for backstory.

January 27, 2018

Packed out of my hotel room by the noon checkout, although my car's still in their parking lot at the moment. I meet the apartment manager at the new place at 6pm to move in there. (No furniture, but I brought two sleeping bags and a tray table to put my netbook on. I should buy a folding chair, I wonder where would sell that if there's no Target around here?)

I looked for a clean quiet room, in walking distance of work (about a 15 minute walk), with a shower/stove/refrigerator (pity it's gas, but oh well), controllable temperature, outlets, and a lockable door. (Well, it was quiet when I was there, we'll see how it is long term, but I have earbuds and can get earplugs.) This fits those criteria, and is quite reasonably priced.

And it has NO CATS IN IT. I might actually be able to get through the rest of the toybox roadmap in a finite amount of time. We'll see.

I type this from a starbucks. Well, a sort of starbucks. It's a corner of the grocery store I found (Metro Market, 2 blocks from the new apartment and more or less on the way to work from there) that has INSTALLED a starbucks, which opens on the 31st. Until then it's a seating area. I'm all for it. (No outlets, phone battery's already dead, netbook's at 38%. The replacement battery Fade ordered is regular size, not the jumbo size ones which last a long time but stick out awkwardly in a way that means I've now broken two of them.)

I've mostly been reading and closing browser tabs. So much backlog...

January 26, 2018

End of my first week at Johnson Controls. It's nice, for a Fortune 500 corporation that's put me in a cubicle. I don't see a problem doing 7 months of this.

I found a broom closet for $575/month with most bills included (you can get really SMALL efficiencies if you try), signed all the apartment paperwork, and today got a cashier's check for the prorated first month. They say I can move in tomorrow at 6pm.

Heart still beating way too fast this evening. I gave up and bought some chicken and one of the steaks the grocery store had on sale. My hotel room has a kitchenette in it (it's a lovely place, which has apartments on the top two floors. I found this apartment by talking to their apartment people, and they got me something in another building they manage the next block over). A week of fasting seems long enough for now, maybe I can atkins for a bit.

I've read organized, detailed diet plans with Intermittent Fasting and Keto Protein Loads and really, I'm not good with this. I can manage "do this" vs "not do this at all" distinctions. I suck at exerting consistent willpower over regulation of amounts over long periods of time, I have other things to DO. So "not eating today", "not eating carbohydrates"... That's about the level of granularity I can manage.

Hmmm, maybe I should find a gym. These tend not to work for me, but I'm still establishing a routine here. Walking to work and back builds a little exercise into my day, so that's nice.

January 25, 2018

I've been more or less fasting for a week now (I'm 80 pounds over what I weighed in college, that's like 1/3 of my current body weight), but something's going weird this time. My resting heartbeat lying on the bed at night is over 100 bpm, that doesn't seem right.

I've been using caffeine as an appetite suppressant, which amounts to a diet monster energy drink and a 1.5 ounce piece of "driving chocolate" per day this week (which is like _two_ energy drinks worth of caffeine, and I eat it in small chunks through the day). But if I stop having caffeine around 4pm and it's 9pm, shouldn't it have worn off by now? Hmmm...

Last time I did this I leaned heavily on McDonald's Side Salads (15 calories by themselves, still less than 50 with half a pack of vinaigrette dressing), which was fine for the drive here but the closest McDonald's is like an hour walk from my hotel room.

On tuesday I found a can of "monster muscle vanilla" and had my 200 calories all at once (with actual protein), but it was that convenience store's last can, they haven't restocked, the grocery store I found doesn't carry it, and google is unhelpful. (Hipstercart claims they can get it from kroger, but the nearest kroger is halfway to Chicago so I'm not sure what they mean by that.) There isn't a Target downtown either.

Of course another thing that does this to my heart rate is food poisoning, and without the salads my digestive system seems to have entirely shut down this time. I wonder if that's related...

Broke down and bought two scoops of the "chicken and gravy" stuff the grocery store had. Absolutely delicious, and let's see if that settles my system...

January 24, 2018

I've been fasting on this trip, by which I mean eating a 15 calorie McDonald's "side salad" with 40 calorie vinaigrette dressing (using half a packet). But at one stop I failed my saving throw vs free pie, because if you ordered through the kiosk you got a free apple pie. McDonald's is trying to turn itself into a giant vending machine with no humans working there, as predicted by the expanded version of my old three waves talk, where stage zero is an idea you haven't acted on and stage 4 is fully automated with nobody working there anymore. Neither is a "business" so I didn't write about them for The Fool way back when, but it's kind of the full life cycle. "Computer" used to be a job title people did. Telephone operators used to connect every call. Elevators had operators before they had buttons. Further back, every household used to spin and weave and sew its own clothing, grow and preserve its own food...

There was a display of farm statistics at the last rest stop heading out of Texas, neatly explaining why "basic income" is now possible: a century ago we had 60% of the population working on farms (and a century before that it was 80%), now it's 2%. It was an Oil! Oil! Oil! display touting Tractors! and Chemical fertilizers!, but along the way we had the "green revolution" where dwarf wheat quadrupled food production with better plants, so either way a smaller fraction of the populace is now producing way more food. (Most corn isn't for humans, see also the circle of rice.)

This means, strictly speaking, we don't _need_ the work over 90% of the population does, as in we're not going to starve without it. (But housing! The construction industry employs 10 million people, that's about 3% of the population of the country; 2% to <10% is a lot of slush factor for "ok, maybe necessary". And yes, I'm glossing over the can of worms that is healthcare, given how utterly screwed up it is in the USA, but most of "healthcare" is a giant bloated insurance industry and about half the rest is an administrative bureaucracy engaging with said insurance industry. Googling for per capita statistics, between europe and the US I get 3 doctors, 10 nurses, 2 pharmacists, and 1 dentist per 1000 people. Altogether that's 1.6%, still plenty of slack in the <10% actually assumed necessary above.)

Add in the revolution in transportation brought about by containerization starting in the 1950's, internet and smartphones, and the ongoing advances in solar power and self-driving vehicles, and meeting the basic survival needs of people is likely to take a _very_ small part of a modern economy a decade or so from now. Our big growth industries are things like entertainment. (Most people would rather hang out with friends, but who has time or energy when life revolves around sitting in a cubicle pretending to work most of each day?)

The knee-jerk argument against basic income is we can't afford to feed and house people for free, but exploding prison population? No problem! QE/bank bailout? Of course! If 2008 made one thing clear, it's that modern money is completely made up, it's numbers in a computer that the rich and powerful can edit on demand by _trillions_ of dollars, their only constraint is making sure the rest of us keep believing in it, respecting it, and chasing it.

The theoretical problem with printing money is inflation, so you tax the excess money away. The actual problem with adding money to the system is it pools in the pockets of rich people, so you have to tax _them_, and they complain loudly, with entire think tanks tasked with lying to make them look indispensably important.

Rich people claim they're job creators but they're not: supply comes from workers and demand comes from everybody buying stuff they want or need. Billionaires are gatekeeping middlemen. But even assuming they were correct, the "incentives" argument gets cut out by real world research showing the Laffer curve's disincentive effect doesn't kick in below a 70% tax rate. And what's another billion to a billionaire except a way of keeping score? Compound interest says they can spend millions of dollars every day for the rest of their lives and end up with more money than they started with. It _doesn't_ run out. Techie co-founders like Paul Allen and Steve Wozniak (or founders like Jim Manzi of Lotus 1-2-3 fame) quit at $100 million because at that point more doesn't MATTER, the interest buys you a new house each week. They never have to do anything _useful_ again in their lives.

The people who continue to actively accumulate wealth into the billions are either driven by something other than money, or think they're bidding on the Titanic's Lifeboats and can never be "rich enough" to sleep soundly. (This is a self-fulfilling prophecy when their own asshole behavior in pursuit of wealth is the disaster they expect to be sending torches and pitchforks after them someday.)

David Graeber wrote about BS Jobs, which are useless jobs that produce or accomplish nothing. Many other jobs are only mostly useless, a 40 hour work week with 4 hours of actual work is fairly normal. They're created to satisfy a capitalist society's need for people to be employed in order to be valued members of society (I.E. "productive members of society") without producing anything anyone needs. Then there are entire industries like Tax Preparation that defend themselves via lobbying or similar, but are completely unnecessary. (Your information's already been reported to the IRS; in sane countries there's a website or similar you go to that has all the forms already filled out. You don't have to pay hundreds of dollars to pointless middlemen bureaucrats.) I'm also reading about how underemployment of lawyers is the new normal. There are no "safe" jobs, and many of the ones with good salaries tend to involve a modern guild like the American Medical Association that restricts membership.

But automating away all the jobs isn't a _bad_ thing if you remove enough billionaire middlemen (intentional bottlenecks) to clear the way to provide basic income, with which people can find new things to do. Creativity is _helped_ by having free time/energy/flexibility to play. Steve Jobs and Bill Gates didn't start new businesses because their survival depended on it, both were supported by their parents well into their 20's. They wanted to move up and have an impact on the world. As Graeber said, 99% of people not doing anything useful with their time is no different than 99% working retail jobs at Sears before Amazon mail orders came from a robotic warehouse to your door by delivery drones.

This is the kind of stuff I muse about on long cross-country drives. We're waiting for the Baby Boomers to die off so we can reach the kind of post-capitalism future Star Trek predicted half a century ago, but which they're too old and set in their ways to ever believe could be real, even with solar power and self-driving cars and smartphones. Our problem isn't famine, it's obesity. There's a _distribution_ problem: since 2008 we've had a simultaneous problem of abandoned houses and homeless people, which says the way we choose to organize society _sucks_. "We've always done it that way" isn't helpful when the rules change.

I do worry about the resource curse: a government that doesn't need its people tends to suck for those people, who can't strike for better conditions. But staying with capitalism isn't going to fix that. Again, the real problems are political, not technological.

January 23, 2018

Had to set up a new xubuntu system, 16.04 this time (if work is _paying_ me to use something with systemd...) and the procedure is always changing. This time the way to get the scroll bars back is to edit /usr/share/themes/*/gtk-2.0/gtkrc (where in this case the * is Adwaita, that's the theme selected in settings->appearance->style) and switch "GtkScrollbar-has-*-stepper" from 0 to 1, and also to change the GtkRange-stepper-size to 13 (from 0). (In theory you can set it globally but in practice every xubuntu theme manually sets these, overriding the global setting).
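In gtk-2.0 rc syntax those widget style properties end up looking something like this (a sketch from memory, not a verbatim copy of the theme file):

```
style "scrollbar-steppers" {
  GtkScrollbar::has-backward-stepper = 1
  GtkScrollbar::has-forward-stepper = 1
  GtkRange::stepper-size = 13
}
class "GtkScrollbar" style "scrollbar-steppers"
```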

Without scrollbar arrows, scrolling the display up or down a fixed number of lines requires fiddly little movements with the mouse and isn't always possible at low screen resolutions. With the arrow, click once to go up one line. So naturally, ubuntu disables them.

The SH4 VoD system has shown up in Austin! In a way that required a signature to accept delivery. I am in Milwaukee. I'd have them forward it to Fade in Minneapolis so I can pick it up when I visit (only 5 hours away, longish but reasonable weekend drive)... except for the requiring signature for delivery part. Hmmm...

January 22, 2018

Made it to milwaukee, first day of the new contract. Reading printouts, waiting for IT to drop off a computer, listening to long "this is the project" lectures from multiple coworkers. Pretty standard so far.

Quiet time in hotel room afterwards, cat-free. Luxury. (I napped, due to all the fog I was up driving last night until 2am.)

I just did "diff -u <(git diff toys/*/fmt.c) <(diff -u fmt.c fmt2.c) | less" with malice of forethought, because I saw this and went:

$ git am 0001-Un-default-fmt-1-while-it-s-in-pending.patch
Applying: Un-default fmt(1) while it's in pending.
error: toys/pending/fmt.c: does not match index
$ git diff toys/pending/fmt.c tests/fmt.test | diffstat
 tests/fmt.test     |    7 ++++
 toys/pending/fmt.c |   76 ++++++++++++++++++++-------------------------
 2 files changed, 42 insertions(+), 41 deletions(-)

I should really finish that. I wonder if I left myself a blog entry talking about what I was doing... No I didn't. Gotta read the diff.

A failure mode when I get _really_ overwhelmed is having a half-dozen tabs in a console window somewhere recording the state of an ongoing cleanup, where the backscroll shows tests I'm running that need fixing, experiments I did against multiple versions, and so on. If my netbook reboots before I get to a good stopping point and write it down or turn it into proper tests that TEST_HOST passes, and then I don't get back to that particular command for a month, I often wind up just "git resetting" the file and losing days of work that it would be easier to just redo.

This is why I call it "swap thrashing". I really hope to be able to flush some cache on this expedition, as well as becoming flush with cache. (Sorry, couldn't resist.)

Elsewhere, the debian sh4 maintainer is being very nice and sending Rich Felker and myself a pair of cheap taiwanese Video On Demand boxes that (can be made to) run sh4 debian, and when Rich was talking about tracking down the right adapter to hook up the serial console, I asked to be kept in the loop. This led to the following exchange which I record here so I don't have to type it again if it comes up in another context. :)

> As I said, it’s already pre-installed with Debian Wheezy. I tested both boxes.

I was talking about Rich's attempt to get a serial console.

Without console output the box provides an all-or-nothing canned distro that has to bring up a large chunk of userspace before you have any output. So if I upgrade the kernel from -rc1 to -rc2 and it has a problem with some driver halfway through, I never get to see how far the boot got. If I tweak musl and sshd didn't come up, all I know is sshd didn't come up. I can't rdinit=/bin/sh or rdinit=/bin/helloworld-static to see what _does_ work. If device tree version skew can't find the interrupt controller because they changed something and the real problem is I need to upgrade dtc now, I have no trail of breadcrumbs to track that down.

> Connect power, ethernet, wait a few minutes until the LED is solid blue.
> Then check your router/DHCP server which address the box received, then just:
> ssh root@$IP
> Password: root

Which means that if the kernel doesn't boot all the way through, successfully extract its root filesystem, get through its init scripts far enough to successfully configure the network, and launch a daemon against a working C library, all I know is "it didn't work".

I've fed cpio.gz to kernels that only had cpio.xz support configured in. I've seen upgrades introduce a kconfig guard symbol that switched off BINFMT_ELF. I've accidentally dynamically linked something I meant to statically link that the init script depended on. I've seen binutils version upgrades make it write an inappropriate instruction because now it needs --no-really-stop-it-with-the-vector-extensions in ./configure, if I can't see the illegal instruction printk during the kernel boot that would really not be fun to track down.

I've worked on enough "If I change anything, it either works or it doesn't with no diagnostic information in between" systems over the years to know I probably wouldn't poke at anything that brittle in a hobbyist context. It would go back on the todo heap and stay there because I'd be afraid to touch it.

Possibly I could try getting a netconsole working on a static address (although that's still pretty iffy about early boot messages, many moons ago there was some work to create interruptless network driver stubs ala the early_printk serial drivers, but I think Alan Cox shot that idea down? Don't recall and the pages google's finding say netconsole just doesn't do early boot messages before interrupts are enabled, which is basically when it's about to launch PID 1 (interrupts = we can drive the scheduler now)...)
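For reference, netconsole gets configured on the kernel command line as local-port@local-ip/device,remote-port@remote-ip/remote-mac, something like this (addresses illustrative):

```
netconsole=6665@10.0.0.2/eth0,6666@10.0.0.1/00:11:22:33:44:55
```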

> Check /dev/sda1 if you want to see the uboot config.

With a serial console I could use u-boot interactively, and set up tftpboot and all that fun if I really wanted to (without even persistently changing the uboot config).
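(Interactively, that's roughly the following at the u-boot prompt, assuming the build has network support compiled in; addresses, load address, and image name are all illustrative:)

```
setenv ipaddr 192.168.0.2
setenv serverip 192.168.0.1
tftpboot 0x8e000000 uImage
bootm 0x8e000000
```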

Some kernel developers won't touch a box without a jtag, but I'm the "stick printfs in everything" kind. With serial console you can get it down to two lines right at the start of "it's running code":
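Something like the following, where the register address is a made-up placeholder for wherever the SoC maps its UART transmit register:

```c
/* hypothetical memory-mapped UART transmit register -- the real address
 * comes from the SoC's spec sheet or uboot's serial driver */
*(volatile char *)0xffe8000c = 'h';
*(volatile char *)0xffe8000c = 'i';
```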

This example is missing the real-world "spin checking the ready for output bit in the status register" part, but you can usually track down the appropriate uboot serial output driver and figure out what your two lines are. Or break down and read the spec sheet. :)

Quiet hotel room without cats. I get so much more done here, even if at the moment it's still mostly just catching up on email...

January 21, 2018

So much fog approaching wisconsin. Stopped at a McDonalds for a couple hours to see if it would clear up, and it got worse instead. Oh well, got to catch up on some email, anyway.

I've recently noticed that "I've publicly said this 5 times" doesn't mean other people have heard it, so here goes again.

Speaking of which, it's possible the "minimal" system will grow a fifth required-ish package: cryptography. (Largely thanks to out-of-control state surveillance bureaucracies trying to endlessly expand their budgets.) If public key signing is required to verify package downloads (not just checking a hash), or https:// downloading becomes necessary for the base OS build (we're flirting with that already), then that doesn't really belong in any of the above, because "not leaking data through crypto side-channels" is its own area of expertise needing its own set of experts doing their own package.

Except... ktls is half a solution exported by the kernel already. It's possible some crypto is in scope for toybox (such as https) using ktls. Right now it's just half the plumbing and you need a big wrapper around a member of the openssl family to use the ktls plumbing the kernel provides, but maybe that's doable and/or less of an issue in future?

Encryption is not within scope for toybox because of the same zlib/curses problem: external libraries _must_ be optional so we'd need to provide a simple built-in version of their functionality, and I ain't rolling my own cryptography. (Hence wanting an stunnel style solution for wget and httpd forever, without which neither command is hugely worth doing...)

January 20, 2018

I stopped at a McDonald's in Texarkana to recharge my phone on the drive up to wisconsin, and I saw email from the manager from back when Large Phone Manufacturer That Still Wishes To Remain Anonymous sponsored some toybox work a few years back, and I accepted the linkedin request because for once it's somebody I actually know. (Well, we never met in person, but I sent her a lot of email.)

This opened linkedin, and one of the links on there was an incredibly vague position at... Google Austin. Which I found greatly amusing, and I almost tweeted "I really, _really_ shouldn't apply to this..." with a link, but it would require too much context to explain.

But "too much context for twitter" is what blogs are for. So:

Yes I just signed up for a 6 month contract in wisconsin (which I am driving to, so I'd have the car up with me), but last time I applied to google it took 8 months to work through their hiring process, so I wouldn't expect them to conflict. (Besides, given my previous experience with Google I wouldn't expect to _get_ the job, I'm mostly just amused.)

The _first_ time a Google recruiter called up and tried to hire me was over 15 years ago. (I took the phone call in the apartment I had when I worked at WebOffice, so 2001 or 2002.) I think I was on Google's radar because way back in 1999 when I wrote stock market investment columns the portfolio I covered included Yahoo, and I wrote an article mentioning I preferred Google's technology. And google sent me a t-shirt and a bunch of stickers, because they said it was their first stock market coverage. (This was 5 years before their IPO, they were still a "linux search" site in beta, I think I heard about 'em through slashdot. That's how long ago it was.) Or maybe the recruiter called me because of my posts to lkml, who knows?

I've never particularly wanted to move to California, although my reasons why (expensive and earthquakes) don't seem to apply to Tokyo for some reason. Huh. (At this point I suspect it's inertia.) But Google recruiters kept calling like clockwork every 6 months for the next few years.

Then ten years ago ChromeOS came out, which sounded like fun (this was _after_ I co-authored a paper on why Desktop Linux hadn't happened, so yay new approach with a hardware vendor behind it who could get preinstalls). So I followed Google's "apply to work on ChromeOS" link but selected the Dublin Office from the site selection pulldown because I'd never lived in Ireland and that also sounded like fun. This confused Google's hiring process (seriously, I break everything), so they didn't get back to me for a few months, but I was in the downtime between contracts and didn't mind. (Consulting meant I earned enough I could take time off between contracts, which is when I got most of my open source programming done. This is back before marriage led to big house and other people to support, or at least reassure that I know where the money is coming from next month).

Google's version of a cat's "when in doubt wash" seems to be "Site Reliability Engineer", which they could do in Dublin, and it sounded worth a try, setting off an odyssey of endless phone interviews, culminating in an all expenses paid trip to the Googleplex (my first time in Silicon Valley proper), and then deculminating in some sort of telepresence interview _after_ that in Google's Austin office (next door to Qualcomm, northwest corner of I35 and Mopac, and deserted except for a receptionist when I arrived) where somebody on the other end of a camera wanted me to write code in chalk on a blackboard. As I said, I confused them, and they spent a long time making up their mind...

Except they didn't. A full 8 months after I'd applied, when my bank account was getting kind of thin waiting for a decision (I'd have gone to work somewhere else months earlier but I was waiting to see what Google thought), they said I'd passed all the interview hurdles, my resume was sitting on the desk of whichever cofounder it was who personally approved all hires, but the position I'd applied for had been filled and I needed to restart the process from scratch.

I thanked them for their time and got on with my life.

The next google recruiter to call me 6 months later was confused about my status in their HR system. I explained my Interview Odyssey and resulting reluctance to reopen that can of worms, she put a note in my file, and they stopped calling for a while.

Shortly after I did my 2013 toybox talk about hijacking android for my own purposes to steer the computer industry, I got a call from another google recruiter (no, he hadn't seen my talk) and went "ok, why not" and went through the thing again. Except I'd just finished up my 6 month contract at Cray in Minnesota and was spending a week with my sister and her 4 children before returning to Austin, and hanging out with small children exposes you to every stomach bug they pick up at school, so I had to cut the interview short to urgently visit the bathroom. The Google guys decided _not_ to continue, and I went "ok" and got on with my life. Haven't heard from a google recruiter since.

Google merged toybox in 2015 and has been using it since, but toybox development's stalled badly as SEI struggled to stay afloat. As the company lost staff instead of staffing _up_ we all wound up doing 4 jobs apiece (the corollary to Brooks' Law I learned at timesys remains true, removing people from a project is as big a delay as adding them, you spend all your time on "knowledge transfers" and then the remaining people have to come up to speed on tasks the departed used to do) and the stress started affecting my health.

At the start of the year I went "this is the _second_ set of taxes I'm going to have to check my bank statements to see which paychecks they managed to make, _after_ dropping us to half pay", and when a recruiter offered me twice what SEI had paid back when we were full-time (so 4 times now even if they _did_ make every scheduled paycheck) I took it. (The email I got from Elliott talking about "the thing that replaces toybox" helped with my decision to sign the contract. The advantage of a 9-5 job in an office is you know when you're _not_ working, and can do open source stuff without guilt...)

I was tempted to apply to the linkedin thing in part because the idea of using Google's "20% time" to work on toybox was just too ironic. Google's never paid me a dime for toybox. Elliott bought me lunch once. And they gave me an "open source award" (along with a dozen other people) that came with a $250 gift card, but I had to go to payoneer's website to activate the card and the login credentials they sent me didn't work. I even poked the Google open source award coordinator to confirm the credentials, but never could log in and after enough failed tries it disables itself. (I still have the card in my wallet, probably expired by now.)

And yes, I'm aware 20% time no longer really exists, that's a whole 'nother rant (that links to December 1 but the topic continues through december 2, 4, 5, and 6, I should collate old blog entries into proper writeups someday. My todo list runneth over. I prepared and presented a proper talk on that topic at Flourish last year, but they never posted the recording.)

Anyway, the bit about the google job is moot because when I clicked through it went to an application page on (with the same info), and when I clicked on "apply" there Chrome gave an error page because the site "redirected me too many times". (I repeated this 3 times to be sure it wasn't transient, then got on with my life.)

I break everything. And I continue to confuse Google's recruity-bits.

Anyway, back on the road to Milwaukee.

January 19, 2018

Finally finished flushing the lkml and qemu-devel folders into "2017" sub-folders so thunderbird doesn't choke on the giant mboxes bigger than it can handle (making email download sit there and twiddle its thumbs for a couple minutes when the filters try to move the first message into that folder, sometimes making the smtp server time out).

And this meant I started reading qemu-devel at the start of january, and noticed Laurent Vivier pushing his m68k support patches again. Looks like seriously this time. (Yay!)

So I wander to my qemu directory, make sure it isn't locally patched (git diff), do a git pull, start to ./configure, kill it and do a make clean just in case, and...

$ make clean
  GEN     aarch64-softmmu/config-devices.mak.tmp
  GEN     aarch64-softmmu/config-devices.mak
  GEN     arm-softmmu/config-devices.mak.tmp
  GEN     arm-softmmu/config-devices.mak
  GEN     i386-softmmu/config-devices.mak.tmp
  GEN     i386-softmmu/config-devices.mak
  GEN     ppcemb-softmmu/config-devices.mak.tmp
  GEN     ppcemb-softmmu/config-devices.mak
  GEN     x86_64-softmmu/config-devices.mak.tmp
  GEN     x86_64-softmmu/config-devices.mak
  GEN     config-all-devices.mak
config-host.mak is out-of-date, running configure

And so on and so forth. It generated dozens and dozens of config-target.h and blah-commands.h files just so it could delete them!

Meanwhile, "git clean -fdx" took like 2 seconds. Except for the part where .gitignore can tell it to ignore files which don't get deleted, and I'm reluctant to try to add options to override that because qemu uses subtrees for dtc and stuff and I don't want to delete them.
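For reference, a throwaway-repo demo of which clean flags remove what (the commit identity flags are just to let commit run; the nested-repo note at the end is the part that makes the subtrees scary):

```shell
# Scratch repo: one tracked .gitignore, one ignored file.
cd "$(mktemp -d)" && git init -q .
echo junk > ignored.txt
echo ignored.txt > .gitignore
git add .gitignore && git -c user.email=a@b -c user.name=t commit -qm init

git clean -fd            # untracked files/dirs go, ignored files survive
test -f ignored.txt && echo "still here"
git clean -fdx           # -x removes ignored files too
test -f ignored.txt || echo "gone"
# A *second* -f ("git clean -ffdx") is what it takes to delete untracked
# nested git checkouts; without it clean leaves them alone.
```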

Projects seem to have a natural lifecycle where they get so complicated fewer new developers come on board, and eventually they starve for resources when the existing batch ages out. The average age of linux developers is Linus's age, and he's something like 47 now...

January 18, 2018

I have a failure mode during software development, which is naming stuff strangely. I had to do a cleanup pass removing the "9 princes in amber" references (a book series by Roger Zelazny) because after enough repetitions of the variable "pattern" I threw a "logrus" in there in self-defense and it spiraled from there.

Now I'm cleaning up ps, which has -o fields living in "struct strawberry" with the variable length char array at the end of course called "forever". This doesn't help anyone understand the code.

I'm banging on gzip right now and resisting calling the --rsyncable option --titanic during development.

All this should have been cleaned up and properly explained long ago, I've just been so drained trying to keep SEI afloat. And now I'm packing to move to Wisconsin for half a year.

January 17, 2018

The reason "sudo echo 0 9999999 > /proc/sys/net/ipv4/ping_group_range" doesn't work is that the shell opens the redirect file before calling sudo, which means it does so as your normal user. Alas, putting quotes around the sudo arguments doesn't work either, because sudo doesn't re-parse the command line, so it tries to run a single command with spaces and a > character in its name.

This is why I wind up doing sudo /bin/bash a lot.
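The two standard workarounds both arrange for a root process, not your shell, to be the one that opens the file:

```shell
# 1) Hand the whole command line to a root shell to re-parse, so the
#    redirect happens after the privilege change:
sudo sh -c 'echo 0 9999999 > /proc/sys/net/ipv4/ping_group_range'

# 2) Keep the redirect out of it entirely: your shell just sets up an
#    ordinary pipe, and a root-privileged tee does the writing:
echo "0 9999999" | sudo tee /proc/sys/net/ipv4/ping_group_range > /dev/null
```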

January 16, 2018

Youtube cut off monetization of channels with less than 1000 subscribers. It's the hobbyist->employee->bureaucrat progression again. I should really give a proper _recorded_ version of that talk somewhere.

I gave it at Flourish. I was prepared, reasonably well rested, and gave a version I was proud of. They recorded it. The recording never went up. There is very little a conference can do to annoy me more than _promise_ to post a recording of a talk and then _not_ do it. Sadly common problem: Flourish screwed up the recordings both times I went there, LinuxCon Tokyo 2015 had a video camera that apparently wasn't on, Ohio LinuxFest pointed a video camera at me and then only posted audio....

Penguicon failing to record Neil Gaiman's "crazy hair" reading after which I got him to say "By Grabthar's Hammer, You Shall Be Avenged" into the microphone with NO TAPE IN THE MACHINE was the Science Fiction Oral History Society's fault, but it was at a conference I co-founded so that makes it my fault. I added a "new thing" to each year of Penguicon (year 4 was LN2 ice cream, we dumped the extra LN2 into the swimming pool sunday afternoon), and year 3 (I think?) I bought 5 MP3 lecture recorders with a promised 12 hour battery life and taped them down to the tables in each panel room. No idea what's happened to that since Matt Arnold drove away all the people who used to run it, I haven't been back in 10 years...

January 15, 2018

Ok, fixing ps -T. If I go "ps -AT" I get 13 hits for chrome (pid 1401 and 12 threads). But if I go "ps -T 1401" I get just one hit (pid 1401, no threads).

And done. The /proc layout repeats the thread under the pid, so /proc/123 will have all the process information for the parent, and then there's a /proc/123/task/123 that _also_ has it. There's a check to notice that and skip it when we're parsing threads, which was supposed to copy the parent pid into the child's PID slot when it doesn't skip it.

I.E. if the parent->PID and my->TID were equal, return. Else my->PID = parent->PID; The else bit was missing.

January 12, 2018

Signed the contract for the new job. I need to be in Milwaukee by the 22nd.

My next choice is do I drive up or fly up? I have southwest credit from cancelling my return flight from ELC last year (work flew me from LA straight to Tokyo instead, yes those trips were always on that short notice), and if I don't use that it expires soon. But if I drive up I'd have the car with me and can drive to see Fade on weekends. (It's a little under 5 hours drive each way, reasonable to drive up friday after work and drive back sunday evening. Flying each way probably takes _longer_ if you add in getting to the airport, through security theatre, and then public transit through minneapolis.)

Decisions, decisions...

January 11, 2018

I tried -rc7 in mkroot. The arm build grew a perl dependency again. The x86-64 build died because it couldn't find an ORC unwinder. Wheee!

New battery arrived! It's 6 cell rather than 9 cell but hopefully that means it's less fragile. (I ordered 2 9 cell batteries and wound up breaking both, they stick out awkwardly as a sort of footrest leaving the keyboard at an angle and only letting the screen fully open if everything aligns exactly. The 6 cell ones I've never broken (just worn out) and I can almost lay the screen flat back.)

Downloading email is _so_ much faster now I've cleaned out LKML. I'm still shoveling out qemu-devel, and then buildroot's got over 100k messages in it that should probably get moved out of the folder my mail filters are dumping new messages into. (You can have an enormous mbox that doesn't get USED during download and it won't slow down email downloading.)

Yes, this is related to the "I have to download from gmail via pop because imap is far more broken".

January 10, 2018

Met with the recruiters for the new job, picked up the pile of paperwork to sign. Feeling kind of morose, like I'm letting SEI down.

I've spent 3 years working for Jeff, which I think is longer than I've been at any other job. (Even beat out my first job at IBM by a few months.) I believe in what SEI's trying to do, and stayed a year and a half longer than they could reliably pay me, and I'd happily go _back_ there after this contract... if there's anything left. Jeff insists that there's a new contract coming soon that gives us a change of direction and fresh funding, except the fix for everything has been Real Soon Now for 2 years. This is the THIRD set of investors that have deliberated at length about giving us money. The stress is killing me, I need a break.

Alas, you can't fund from operations targeting utilities without bootstrapping to a large size and going through standards compliance nonsense. To get around that Jeff partnered with a big company that screwed us over for internal big company political reasons, and then he tried to put together a funding round based on another big company that was once _again_ paralyzed by big company internal politics. Disruptive technology 101: a large existing corporation cannot commercialize anything new in-house, it can only buy it once it's already proven.

This is not on the tech side of the house, I dunno how to fix it. Make a product and sell a product to people who will use the product I understand, navigate corporate status/dominance games where everything is some shade of affinity fraud and nobody involved in the decision making will be personally affected by the outcome except politically... that's not a domain I've spent a lot of time building skills in, because I sympathize with the people polishing guillotines every time it comes up. During the entire "postwar boom" period the top tax rate in the USA was 91%. (In 1963 they lowered it to 70%. Reagan lowered it to 28%, at which point our deficit exploded and corporations stopped investing in anything. Taxing profits makes companies spend money on research and training and all sorts of things that won't impact next quarter's numbers but are better than seeing the money confiscated by the feds. Lowering taxes makes them stop making any long-term investments in their business, their fig leaf being they can pile up cash and buy some other company that did all the right things later, the reality being they legally embezzle it all. Why is this hard to understand? This gilded age royal court nonsense is a _sickness_. It is symptomatic of an unhealthy economy, these are parasites feasting.)

Sigh. Happier thoughts.

My sad little netbook is plugged into wall current. It'll run without a battery, but isn't happy about it.

Thunderbird's terrible at dealing with large mbox folders, where "large" is "a year of linux-kernel or qemu-devel". So I've created "lkml-2017" and "qemu-2017" subfolders and am once again copying all the year's messages into them and compacting stuff. It's REALLY slow.

You click on the first message, scroll down to the end of the range you'd like to copy (too much and it triggers the OOM killer, I can get away with maybe 20k each pass), shift-click on the ending message, then wait multiple minutes for the highlight to happen, then right click on any of the highlighted messages and wait the same amount again for the pop-up menu to appear, then navigate to the folder you want under "copy to->", click, and go to lunch.

If you've highlighted more than about 25,000 messages the copy will complete (and it deletes them as it goes), but afterwards thunderbird does some insane processing that exhausts all memory, drives the system into swap, and eventually triggers the OOM killer to kill thunderbird. (That's assuming you don't think the system is hung because your mouse cursor takes 3 minutes to respond to attempts to move it.)

If it's less than 25k messages it just takes forever to complete. As in I went to the grocery store and it wasn't done when I got back. Did I mention 25k messages is maybe 2 months of lkml traffic? It's something like 350 messages a day, plus bursts of ignorable bot-generated nonsense. (Your patch failed to build against the -tip tree! Why are you mailing the list? The giant backports against -stable patch series need their own list, but nobody would read it, so...)

Mostly I read the web archive, but I need the messages to reply to.

January 9, 2018

Dropped Fade and Adverb off at the airport. New semester starting, she's going back to her dorm in Minneapolis.

Sigh. LWN's "is it time for open processors" article (in response to meltdown and spectre) doesn't even mention j-core. It mentions openrisc, and clones of powerpc and sparc, and links to RISC-V's press release. I guess we look too dead to matter.

(I _cannot_ get excited about RISC-V, it strikes me as Open Itanium. They promised everything to everyone and are cashing very large checks, and I see no obvious reason for it to displace x86 or arm? And that's _with_ meltdown and spectre. Maybe china will standardize on it by fiat, but didn't they already try that with a mips fork?)

Of course j-core's still a nommu processor, so you don't _need_ a memory protection bypass because there's no protection to bypass, but... Rich hasn't posted to the linux-sh list in months, and it has an outstanding futex bug for how many releases? QEMU's sh4 serial console's been broken for ages and still not fixed? Our last VHDL code release tarball was 2016 (did that support SMP? I don't remember). We never got even _part_ of the VHDL code up on github...

Jen says that Jeff had a good meeting with the new investors yesterday, but they didn't sign a check at the meeting. Just like we didn't get actual money from the december meeting, or the november meeting, or the october meeting. Not even the money for the "statement of work" that was supposed to tide us over until the end of last year. (I.E. it's a quarter's worth of money we've already spent a quarter trying to get.)

I can't make this happen by myself.

Heard back from the recruiter about the Milwaukee gig. They want me, but the recruiter was trying to talk down my quoted hourly rate at the last minute? Confused.

I had my netbook closed on a bench, it fell off about a foot onto a tile floor, and the battery case cracked in 3 places. Wheee. The screen no longer opened because it was hitting a piece of cracked battery case, and pulling it off took off about half the plastic.

Running it without a battery right now. Fade's ordered me a new one. (Did I mention I know too much about how the sausage is made to be comfortable typing my credit card info in to a website _ever_? I'm aware having someone else on a joint bank account do it does not improve matters, and yet.)

January 8, 2018

And my netbook finally rebooted. I tried to reproduce a mkroot issue which meant a script ran oneit as root, which couldn't attach to the requested console, and on the way out it rebooted the system.

Todo item: fix that.

January 7, 2018

I've found the jpop group responsible for Miss Kobayashi's Maid Dragon's opening and closing music. It is All The Bouncy.

Appending it to my normal music playlist put it right after Demi Lovato's cover of "Take me to Church" and the switch between the two has gears grinding.

Listening to colorado video about demand charges being one of the big drivers for pairing battery walls with solar and going for "complete curtailment". I.E. collect extra solar in your battery wall, and when your batteries are full just switch off the solar panels. Never try to feed anything upstream into the electrical grid. Apparently getting to 80% of this is easy, getting to 100% is hard.

January 6, 2018

Fade took me to Dead Lobster, by which I mean I drove and she paid. Took the hybrid loaner car, which remains deeply shiny. I looked up its price (they're so clearly letting me use this thing as a form of advertising) and it's $29k. That's for last year's model, not the new one. It's not outside what I could afford, but it's outside my comfort zone.

When I was 7 years old I got all excited about the idea of compound interest, and was pretty sure I could retire at 30 (or at least get to the point where I earned more in interest than in paycheck), and I was on course to do that circa 1999 or so (earning $50/hour and offered $75/hour to stay, plus owned two condos that went up $20k each in price while I owned them, not bad for a 27 year old), but over the years instead of saving and investing I gave time and money to friends and family in need. I'm doing ok, but I'm not close to retired.

Take SEI: I've been on half pay there for a year and a half, and they haven't even made those reliably. They're making about 2 out of 3 paychecks these days, which means I'm down to 1/2*2/3 = 2/6 = 1/3 pay which is not sustainable with this house even without the flooding. And that's on TOP of the fact I could make twice that fulltime hourly rate if I went back to consulting, so I'm choosing to earn 1/6 my market rate. I don't care about money, but I do care about a _lack_ of money, and things like social security and medicare won't survive the GOP, so I need to provide for my own retirement. After ten years of marriage Fade's never had kids so I'm pretty sure that's not happening at this point, and she's up in Minnesota, so I might as well go back to the Lucrative Nomad lifestyle before age discrimination kicks in too hard. (It's easy to find work if you go where the work is and do what they pay you to do. I've worked from home on stuff I find interesting, but the stress is getting to me.)

At the start of the new year I decided to look around. I did a phone interview for a gig in Milwaukee on Thursday, and I'm told I'll hear back on that Monday. I very much want to see SEI succeed but I can't make customers pay their bills or investors follow through on their promises, and they're not really sponsoring toybox development anymore...

I got a reminder about the CELF deadline (which has been extended to tuesday). Do I want to commit to travel at this point? Hmmm...

Where did I leave off... ping.c! (Although if I'm to make proper use of that cortex-m board before innoflight asks for it back again, I should do tftp/tftpd since that tftpboots.)

I need to check timestamps in fractions of a second, and I vaguely recall I created a millitime() function which returns current time in milliseconds (for the pun if no other reason: it's millitime). But it's not in lib, it's in ps.c, which means I have a second file wanting to use it so I should move it to lib/lib.c, and looking at that I trivially cleaned up the last function there, environ_bytes(). Except that function should really take environ as its argument, and thus be able to iterate over argv[] too. But I shouldn't go down that tangent just _now_...

Hmmm, this implies that xparsetime() from yesterday should probably return milliseconds too. (When launching command line binaries, that's about the resolution you can expect. You need nanosecond accuracy for things like filesystem timestamps where you're reproducing a previous reading exactly, but not delta-from-current with pages faulted in from storage and a potential call to the dynamic linker in there before any of your code runs. Again, todo item for later.)

Sigh. It would be nice if posix made proper use of C's object orientation. Specifically, in struct sockaddr and friends, wouldn't it be nice if:

struct sockaddr {
  short family;
  // whatever else
};

struct sockaddr_in {
  struct sockaddr sa;
  // blah blah blah
};

struct sockaddr_in6 {
  struct sockaddr sa;
  // blah blah blah
};

Right now you can typecast either to struct sockaddr and it works fine, but it's not obvious what portion of that you can use. With the above you could &(sockaddr_in->sa) and not even have to typecast. (You'd still have to typecast it back once you knew what the type was, the pointers will be the same because a pointer to a struct is a pointer to the first member of the struct, there can be no padding or alignment space at the beginning. But right now it's implicit, not explicit, and if I declare a function to accept "struct sockaddr *" you have to typecast to call it with sockaddr_in or sockaddr_in6. At which point it might as well just be a void *, because that's what I'm going to typecast it TO to make the compiler shut up.)

There are ways to declare your data so "I know what I'm doing, let me do it" does not require hitting the compiler with a rock, but the network stack doesn't do it that way.

(But no, people think you need C++ for that kind of thing. You very much don't. C++ only makes things worse. Because they don't teach how to do it right, and the berkeley guys especially spent their first decade doing CRAZY THINGS. Everything's a file... except network interfaces, those aren't. Ken and Dennis were very good at finding the "sweet spot" between not enough capability and too much complexity, and I greatly admire what they accomplished. Many of their successors in BSD and AT&T, not so much...)

January 5, 2018

Cycled back around to ping. Specifying time between ping instances means you do fractions of a second, but I'm trying to restrict the use of floating point in the code and keep it under #ifdefs (to work on really tiny systems). So my infrastructure for that is xparsetime() (originally for sleep) which returns seconds and fractional seconds in two longs, and only uses floating point when the ifdefs are defined.

I want to add -i, which needs fractional seconds, and at the moment that means I need to turn its optargs from a number to a string (# to :) and call xparsetime() on the string myself. That raises the question of whether I should do the same for -s and -W, so the time parsing is consistent. But neither of those particularly care about fractional seconds, and the OTHER thing the optargs number parsing does is range checking and default value assignment. Having to do that manually raises the expense a bit.

Speaking of range checking, if you _do_ feed a negative time to xparsetime() the non-float path errors, and the float path returns the negative value, except if it's -0.5 then you have to check the seconds and fractions separately to catch that it's a negative value, and really I should just check it in the strtod path. Alas, then it needs another error message which seems wasteful. Also strtod() can skip arbitrary spaces and allows a + at the front so checking for - at the start is more complicated than it seems... (So many corner cases.)

I could add an xparsetime() type to lib/args.c but there aren't really enough users to justify it? The other big one is sleep, but there it's an optarg, not a flag argument, so sleep_main() has to parse it anyway, and in GLOBALS it would still have a sizeof(long) slot needing to fit 2 fields, and a struct that fits in 32 bits on 32 bit systems would have to be 2 short ints so it couldn't do nanoseconds, which eliminates about half the other uses.

Ah, I see: if you go "sleep -1" it says 1 is an unknown option, that's probably why I didn't care at the time. Of course you can do "sleep ' -1'" and strtod() eats the leading space and then parses it and returns a negative number, although sleep then returns immediately so it doesn't hurt anything...

Sigh, ok. Keep -w and -W doing the optargs # integer parsing, and have -i do something different.

January 4, 2018

All the bugs in the world. I updated my offline backups.

Wouldn't it be nice if we had an organization like the NSA that was supposed to find and publish the sort of vulnerabilities that make Hardison from Leverage or Finch from Person of Interest's ability to hack into any computer anywhere NOT FICTION? Instead of hoarding them so it can keep their budgets unlimited in perpetuity by blackmailing future politicians with the porn they browsed as teenagers, and treating any other possible use of the data (such as law enforcement) as compromising their sources? Wouldn't that have been nice.

I know it sounds crazy blaming those sorts of guys for vulnerabilities that go back before September 11, 2001. But we know they're _trying_, the counter-argument is they're not as _effective_ as they'd like to be.

January 3, 2018

The Call For Papers deadlines for both ELC and TXLF are coming up. I'm still sort of "too tired, dowanna travel, I should just podcast", but at the same time I should show the flag and I do have various things I should probably talk about: 0BSD and licensing stuff, mkroot, making android self-hosting... Heck, I could do a panel of just war stories. Haven't bothered to write up any proposals yet though.

Dropped the car off at Howdy Honda. In addition to the crunchy noises from the suspension when it hits uneven road (cv joint?), it's now making growling noises when it's cold and you turn the wheel. (Power steering pump?) It's a 2002 car, about 16 years old now (we bought it used). I've been waiting for app-summonable self driving car services, but that's like 2 more years for early adopters and maybe 5 to be ubiquitous in urban centers. (And in about 7 gasoline volume declines enough that the profit margin for refining, distributing, and selling it with the current infrastructure and transportation network goes negative, at which point a car running on gasoline isn't quite so useful. And yes the auto industry knows this so resale value's likely to decline well before then, but "when does the herd break and run" is always a hard financial question. All the manufacturers are switching over to electric cars now, but the first generation models are still too expensive for my tastes and when the self-driving subscription fleets show up why own your own? Don't sink a well when city water's 5 years away from reaching your neighborhood...)

So yeah, waiting out the awkward adolescence of yet another industry. I remember the days of "when can I get an ISP instead of dialing in to my university or work", "when can I get broadband instead of dialup", "when is my cell phone good enough to stop paying for a landline", "should I just have a laptop and not bother having a desktop", "when can we switch from netflix mailing us DVDs to just the streaming", "hard drive or ssd"...

These days there's "when to get rooftop solar and a battery wall", "when can I get a development environment on my phone/tablet so I don't need a PC anymore"... There's usually some case where "I know where it's GOING but is it quite HERE yet", and a car is a large purchase that kind of imposes itself upon you at times...

January 2, 2018

Jeff just asked me to work on an 8-bit chip design with him, but I'm already stretched too thin on the stuff I'm already doing. The Big Push in november involved GPS, helping arrange investor meetings, trying to track todo items for the whole company, turtle manufacturing stuff, and of course the endless uncertainty. (During investor prep Jeff kept gaming out how the bloq guys might screw us over or flake, so we'd be prepared. About half the time I didn't know where I'd be sleeping the next day.)

Jen not showing up wasn't Jeff's fault but it meant plans changed and I had to try to figure out what Jen does and maybe try to come up to speed on the existing customer phone calls (maintaining their trickle of R&D funding) and see if Weekly Engineering Call With Jeff could replace Daily Engineering Call With Jen if she flaked completely. That's a management job I got sucked into a vacuum for.

Jeff tried to sit me down and teach me enough VHDL to help with the ASIC tapeout, despite niishi and arakawa with years of experience in it _not_ being up to help with the tapeout. He tried very hard to get me to track what RiscV was doing and I _cannot_ bring myself to care, it smells too much like an open source version of Itanium made from hype and overcommitted promises and absorbing all the funding in the world to be less interesting than x86, let alone arm. We met with a nice lady at a university who's doing a toy processor. We sat down to try to sort the instruction bit patterns of j-core so we could redo the front end more efficiently, but didn't have time to finish that. We started to triage the build system for a github release, but didn't have time to finish that. We talked about hooking up the GPS-stabilized nanosecond accurate clock to the userspace signal monitoring package, but didn't have time to finish that...

All this has put me way over mental budget on my normal ecosystem (which used to be aboriginal linux+busybox and is now toybox+mkroot/aosp). Trying to turn android into a self-hosting development environment is STALLED HARD. (Politics: the pixel 7 tablet is discontinued so all the google in-house testing systems are now chromebooks; chromeos runs android apps but what does this mean for testing android base layers? How is development shifting inside google? I haven't had a chance to ask. Whatever it is is happening without me and I'll find out 6 months later when it's too late to provide feedback that might influence any of the decisions. Oh well.)

I haven't done half of what I need to on SEI's Board Support Package because that hasn't really been my job in forever. The website is in pieces and the mailing list is silent because I'm not sure what I'm allowed to _say_. The website needs to turn into kernel Documentation/ files. The arch/sh and linux-sh stuff is badly stale upstream and in _theory_ that's Rich's task but in practice he hasn't got cycles for it (and he only cares about testing on real hardware, even though QEMU is what the upstream kernel guys can actually regression test against; the serial console's been broken for most of a year and we never fixed it translates to a perception that "this platform is dead"). I haven't kept up with new kernel developments in general for the quarterly releases, and I've had patches I've wanted to push upstream for a year, but haven't.

I have a significant issue in that my own projects look dead to other people. I haven't posted to the mkroot list since October. I spent some time getting mkroot closer to parity with aboriginal in terms of supported targets (the reboot was required by swapping out the toolchain for musl-cross-make) but I still haven't got the native toolchains working, let alone the distcc trick or the build control image automation layer. The last mkroot release was in June, using a 4.11 kernel (which is over a year old now).

I spent part of this vacation getting my technical development blog caught up closer to current, which means I've gotten it up to mid-september. (I have daily-ish rough draft notes-to-self in a text file but it needs significant editing and expansion to make sense to anyone else. Plus html tags and links and proofreading.)

I've spent the rest of this vacation trying to do enough toybox work the project doesn't look dead to the android guys. I got the smallest two commands promoted out of pending and I'm trying to deal with the new submission of fmt.c (from the android guys).

I'm sitting on the west coast Embedded Linux Conference call for papers and haven't submitted anything yet because I'm _tired_. It would be really good to show the flag there, but my talk there last year and the one before that were incoherent because I was too exhausted to prepare and give a good talk. (And given my baseline fatigue and redeye flights, one day was NOT enough to recover from jetlag in either case.) It hasn't gotten better since.

It looks like I can either stop doing open source development, or I can get a day job doing something less taxing which I can stop thinking about when I leave the office.

Sigh. Jeff talked about how great sitting down and grinding on a problem is, and I WANT to do that but I CAN'T, because it's a constant stream of interruptions, swap-thrashing between too many projects that never produce output and idle for so long between bursts that when you go back to them you spend all your time trying to figure out where you left off and why, because you've forgotten all the context and have to reverse engineer your own code. This has been the failure mode of toybox development for the past couple years, and now it's becoming the failure mode of EVERYTHING, because I can't focus, and when I do carve out time I'm too exhausted to make good use of it.

Random example: waiting at the airport for the flight back from tokyo, I caught up another couple weeks on the j-core news page. Triaged, edited, and uploaded. Haven't touched it since, so of course it's now further behind than it was when I did that. And of course there's no https on that website, even though adding it is like half a day's work. (Well, for Rich. Probably about 3 days for me; the update scripts are fiddly and there's a dozen implementations with no obvious winner, because the one Let's Encrypt provides/recommends is overcomplicated crap. So many people have made their own, but _because_ there's an "official" one, none of the others has coalesced a big community around it yet and become the obvious one to use.)

I remember when Jeff and I talked about moving all the servers to tokyo. A year or so back we bought a USB drive to do backups to, and he had me install ubuntu on an old 32-bit machine he had lying around. (Might have been at the end of the trip with Tokyo Big Sight?) Out of curiosity I just ssh'd into it and did a sudo aptitude update, and it has 58 packages it wants to upgrade. I'm afraid to do the corresponding upgrade, because if it breaks, what do I do? The person Jeff tried to transfer wale's sysadmin responsibilities to was... me. The servers are in the back of an office in canada, and I'm in texas.

I'm not sure I'm still making a difference here.

January 1, 2018

Happy new year.

Next low hanging fruit pending command to clean up, sorting by source file size, is logger.c. The main reason it's in pending is that it depends on CONFIG_SYSLOGD. That kind of cross-command dependency is unpleasant; I try to either merge such commands into the same .c file (ala ps/top/pgrep) or move whatever they share to lib.

Since the actual function logger wanted out of syslogd was only a few lines long, I just inlined it in the two calls in logger, did the other obvious cleanups, and tried a test build... at which point I noticed the next problem.

The function I inlined iterates through two arrays, facilitynames and prioritynames, which are defined in sys/syslog.h. But you have to #define SYSLOG_NAMES before #including that header in order to get them. Why? Because an #ifdef in the header instantiates the arrays, which means if you #include it that way from two places you get two copies of each array.

The really STUPID part is I can't #include it from one file and then extern reference it elsewhere because the TYPE is defined in the same #ifdef.

One of Rich Felker's coworkers complained about this before, and clearly this was a case of glibc being stupid, but it's one of those things that shipped and now fixing it would break existing programs.

Back to 2017