Rob's Blog


March 26, 2017

I got cc'd on a "Would github like to add 0BSD to its license list" thread while traveling, which I belatedly replied to. It's an important topic I should really spend more time on, so here's a summary of what I said (most of it in reply to Christian Bundy, the guy who apparently accidentally sabotaged 0BSD last year, and who I never had contact info for before now):

The name "zero clause BSD" was part of a strategy to promote public domain equivalent licensing by coming up with a version that's both corporate friendly and hobbyist friendly. This is necessary because post-GPLv3, too many programmers are lumping software copyrights in with software patents as "too dumb to live" and opting out of licensing their software at all. I'm trying to offer a palatable alternative, which requires being aware of and addressing a lot of issues.

The first problem is that lawyers dislike "public domain", as I explained here.

That's a reply to a thread where Google's lawyers asked musl-libc to remove "public domain" code so musl could be used in chromium OS. I encountered this personally two months ago, when I had a ten minute argument with a Google developer whose position was that CC0 was a terrible license because it forces you to "give up your rights", but that my zero clause BSD was a much better license that he could use. (I tried to explain that they're equivalent but he literally wouldn't believe me.)

Lawyers like BSD because AT&T and BSDi sued each other and AT&T lost for violating the terms of a BSD license, thus it's proven to provide paychecks to lawyers. So what I did was take the simplest thing I could call a BSD license (specifically the OpenBSD suggested template license) and make a single small change (removing half a sentence). I did this so I could call the result a BSD license and get that mental "rubber stamp". There were already 4 clause, 3 clause, and 2 clause BSD licenses. Zero Clause BSD was both "just another BSD license" and analogous to the existing CC0.

The reason we need to revive public domain software is the collapse of copyleft. The GPL was a category killer in copyleft, preventing rivals like CDDL from gaining any traction and providing a single giant pool of reusable code under a single license. But there's no such thing as "the GPL" anymore, because GPLv3 split copyleft into incompatible warring camps. Now the Linux kernel and samba implement two ends of the same protocol but can't share code, even though both are GPL. A project that's "GPLv2 or later" can't accept code from _either_ source without losing that status, which leaves projects like QEMU (which want to turn kernel drivers into device emulations and gdb/binutils processor definitions into processor emulations) stuck, because they can't take code from both sources anymore. This situation sucks, and it's only going to get worse with time (agpl, gpl-next, ubuntu shipping cddl code, maybe GPLv4 someday).

Before this, copyleft was simple and let programmers ignore most of the legal issues around software licensing. We had a universal receiver license acting as a terminal node in a directed graph of license convertibility, and had a simple binary decision: "is this license GPL compatible or not?" If it is, treat it like the one license we're familiar with, if not ignore it. And we're done, we don't have to be lawyers. But with GPLv3, you now have to police all your contributions because "it's GPL" doesn't mean "my project can use it".
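That "directed graph of license convertibility" framing can be made concrete. Here's a toy sketch of the idea (my own illustration: the license set and edges are drastically simplified just to show the "terminal node" structure, and none of this is legal advice):

```c
// Toy model of license convertibility as a directed graph: an edge
// A -> B means "code under license A can be incorporated into a
// project under license B". With the GPL as the lone sink (the
// "universal receiver"), the programmer's question collapses to one
// edge lookup. Drastically simplified illustration, not legal advice.
enum { PD, MIT, BSD2, GPL2, NLICENSES };

static const int edge[NLICENSES][NLICENSES] = {
  // to:    PD MIT BSD2 GPL2
  [PD]   = { 1, 1,  1,   1 },  // public domain flows anywhere: universal donor
  [MIT]  = { 0, 1,  1,   1 },  // notice-retention licenses flow "down" toward GPL
  [BSD2] = { 0, 0,  1,   1 },
  [GPL2] = { 0, 0,  0,   1 },  // GPL code only stays GPL: terminal node
};

// Can code under "from" be used in a project under "to"?
int convertible(int from, int to)
{
  return edge[from][to];
}
```

GPLv3 effectively split that single sink node in two, which is why "is it GPL compatible?" stopped being a usable one-bit answer.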

Since GPLv3 split "the GPL", a lot of programmers (and companies) categorically refuse to get GPL code on them anymore. Android's "no GPL in userspace" policy (rewrite of the bluetooth daemon, etc) was a response to GPLv3 destroying "the GPL". Apple similarly froze xcode on the last GPLv2 release of gdb and binutils for 5 years while it sponsored the development of a replacement (clang/llvm), rewrote the smb server, and did a general "GPL purge".

In the absence of a universal receiver license, the next generation of programmers is taking one of two approaches:

1) Refusing to license their code. Not through ignorance, but as Napster-style civil disobedience lumping software copyright in with software patent as too dumb to live and refusing to participate. The next generation is waiting for all those old "series of tubes" fogies issuing DMCA takedowns on youtube AMV's and reaction videos to just _die_ already, and software licensing is an obvious extension of that.

2) Jumping to the other end of the spectrum looking for a universal donor license.

I want to ENCOURAGE the second approach, because today I can't deploy code with no license. But the universal donor of copyright licensing is the public domain, which was the victim of a protracted FUD campaign after copyright was extended to cover binaries in 1983 by the Apple vs Franklin ruling and the resulting shrinkwrap software gold rush competed directly with decades of accumulated public domain software. Commercial interests tried very hard to convince everyone that public domain software was poison, so you'd buy their proprietary software, and this got internalized by people like OSI's lawyer Larry Rosen, who wrote an article in 2002 comparing releasing code into the public domain to abandoning trash by the side of the highway. (No really, see paragraph 5.)

To work around the 30-year FUD campaign against public domain software, people came up with dozens of public domain adjacent licenses (bsd, mit, isc, apache...), which were _almost_ like public domain equivalent licenses except that they required you to copy a specific blob of text into all derived works, and those blobs of text differed from license to license.

This led to a "stuttering problem" where derived works incorporating code from multiple sources would concatenate multiple licenses, which quickly gets ridiculous. The kindle paperwhite's about->license has over 300 pages of license text. Android's toolbox project (the thing toybox is replacing) had dozens of concatenated copies of the same BSD license.

When I asked why they said it's because the copyright dates had changed, and a strict reading of the license meant...

Only public domain equivalent licensing provides equivalent simplicity to what "the GPL" offered. Fire and forget, you don't have to be a lawyer, because public domain equivalent licensing collapses together. You can combine code under 0BSD, the unlicense, cc0, wtfpl, or a simple "public domain" dedication such as libtomcrypt's (at the heart of dropbear ssh) and then use any one of those as the resulting license, without stuttering.

With public domain, you don't have to choose a license: you can always change it later. The "should I choose apache or isc or mit" decision paralysis drives people to side with napster-style opting out because it's _not_ universal donor licensing. Add the stuttering problem and it quickly becomes "this is too complex and fiddly to understand, I'm not getting it on me".

I looked at existing public domain equivalent licenses before creating my own. But "the unlicense" has a confusing name ("This code is unlicensed, I can't use it..."). Creative Commons Zero is extremely complicated for what it does and has received a lot of FUD (some of which is spillover from the various "don't use creative commons licenses for source code, it's not appropriate" campaigns from Eben Moglen and similar). And WTFPL has swearing in the name (which turns out to be an issue for some people)...

Zero clause BSD is "more BSD than BSD". It's a very simple story I can tell people to convince them to license their darn code.

This is why I objected so strongly to OSI retroactively renaming this license. There were _reasons_ for 0BSD to be named what it was. Calling the license "free" anything implies an affiliation with the Free Software Foundation putting it on the wrong side of the historical GPL vs BSD divide. I'm trying to convince people disappointed by the loss of a universal receiver license to move to universal donor licensing, so that they don't refuse to license their code at _all_ (which ~80% of github is doing). OSI muddying that message was incredibly frustrating.

> Nobody on my team (or the OSI's board) had ever heard of the 0BSD when
> the FPL was being reviewed

Which surprised me because SPDX had approved it months earlier and OSI had a policy of keeping itself in sync with SPDX. We discussed it on the spdx list, and SPDX published their license approvals.

> so we were all surprised to hear that the
> 0BSD had skipped OSI approval and jumped straight to SPDX for an
> identifier.

When Android merged toybox, Samsung asked me to submit it to SPDX for approval (to simplify Samsung's internal processes), so I did. Nobody ever asked me to submit it to OSI.

At the time I knew that OSI's lawyer wrote the article comparing public domain to abandoning trash by the side of the highway (linked above) and that their FAQ disapproved of CC0, the most prominent public domain equivalent license. And that they had started pushing back against license proliferation years ago, which at the time meant they'd stopped approving new licenses.

> I don't want to rehash all of the issues
> <>
> with the 0BSD, but we're comfortable using the 0BSD identifier on our
> license, regardless of whether the 0BSD is actually approved by the OSI/FSF.

I think the best summary of the issues was actually the timeline I posted.

I'm not the only person to strip down a BSD license into a public domain equivalent license; the John the Ripper project also did so. But they used a different starting point (freebsd's license) and came up with a differently worded result. If that one had existed at the time, I'd have used it, but they did that in 2015. (After I relicensed toybox, before I submitted it to SPDX.)

Yet a license with _exactly_ the same wording as 0BSD was submitted to OSI under a different name both after SPDX approved it and after Android shipped it in the M preview.

I'll accept that's all a big coincidence, but OSI failed to do any sort of due diligence. OSI had a policy of keeping itself in sync with SPDX, yet months after SPDX had approved it, OSI didn't notice that SPDX had already approved this license under its original name (having raised the "but it's ISC" issue during the initial approval process, and accepted the reference to OpenBSD as justification).

Months later, OSI noticed the conflict, but because OSI has no mechanism for admitting it made a mistake, they asked SPDX to change the name of 0BSD. I objected, both explaining the reasons for the name (and why OSI's name was actively counterproductive) and pointing out the timeline (the link above). OSI's response was basically that I'd convinced them to stop trying to convince SPDX to change their existing decision, but that OSI had no mechanism for ever admitting they'd made a mistake.

> @landley 's position is also clear:
> > I'd really rather ignore OSI entirely than explain that after zero
> > clause bsd had been in use for years, after it had been merged into
> > android and tizen, and after SPDX had published a decision to approve
> > it, OSI randomly accepted the same license under a different and
> > misleading name because this guy said
> > so and OSI didn't do its homework. (Ok, that photo with the caption
> > "this guy" would make an entertaining slide, but entertaining damage
> > control is still damage control.)
> I'm obviously heavily biased, and would prefer not to trample the
> original 0BSD
> <>
> with a modified ISC license,

That's basically a blog post. No software ever shipped with that calling itself zero clause BSD (I know, I searched at the time).

Toybox shipped with this license in 2013, and I explained the strategy behind the name in 2014.

> but when the time comes that we hit 1,000+
> repos we'll be happy to stand behind any decision that's made (the same
> way that we support SPDX in giving us the "0BSD" identifier).

I think this is a good license. I'd like to see more people use it. I think getting the name right is important, and I took the approach I did for specific reasons.

That said, if github wants to go with the John the Ripper license instead, go for it. I don't claim to have invented the idea of public domain equivalent licensing. It's apparently an obvious enough idea that somebody else reinvented about half of it years later.


March 24, 2017

The aboriginal linux mailing list still shows the occasional sign of life despite the project being mothballed for a year now. When it does, I try to point people at mkroot, but it's not finished, and when they do try it, it doesn't always work for them.

I need to set up a web page and mailing list for that project. It's got a repository, but by itself that's not really a project. (It needs documentation too.) And there's the toolchain binary hosting issue I keep poking Rich about, but I need to finish the toolchain build script first, and test them all with kernel builds under qemu.

March 23, 2017

New month, new instance of gmail disabling half the toybox list's subscriptions.

Of course dreamhost deserves half the blame for giving me no control over mailman's settings, so that fixing it is awkward and horrible and has to be done manually via 20+ pages of web interface. But gmail definitely deserves at least half the blame here, and no _other_ mail service regularly does this. Just gmail.

March 22, 2017

Posted a status update to the j-core list about stuff we went over on my most recent tokyo trip. The tl;dr is "we're in feature freeze for silicon tapeout" (which literally involves writing a test suite bigger than the rest of the code combined), and we looked into implementing the sh3 mmu and it's terrible (far more than doubles the size of the chip) because it used the wrong strategy, and now we're stepping back and going "what mmu _should_ we implement that goes with the rest of the j-core design". Stuff continues to happen behind the scenes (rather a lot of it) but it's not visible to the public at the moment. Working on it...

Oh, and we did some work on turtle manufacturing, which is also blocked by testing (in this case the testing the boards need to undergo after manufacturing; we need hardware and software you plug each one into to verify it's good, including bitstreams to drive said hardware with at least test patterns).

Meanwhile, over in toybox-land, I finally got a fix checked in for Elliott's ps crash (which I broke adding thread support, and I still think my dirtree_flagread semantics are non-obvious but having reviewed them again I can't currently think of a better way to do it).

And we're working out when/where to apply postel's law. (Design issues. Always the hard part. Especially the _small_ ones that are too tiny to get a good grip on.)

March 21, 2017

The San Diego guys just asked (via the recruiter they go through) if I could spend another couple weeks with them. The money's decent (less so when you factor in the travel costs for these short gigs). I really want to ship the stuff I'm working on at SEI, but taking off a couple weeks here and there to refill the bank account a bit seems entirely reasonable.


March 20, 2017

Finally got the paste rewrite checked in. A week or two back somebody posted some paste tweak to the busybox mailing list, including test suite entries, and of course I ran those test suite entries against the toybox one (yay tests!), and noticed it didn't work for even basic stuff. (I suspect this command is yet another holdover from back before the "pending" directory went in. I need to do a full audit of everything at some point.)

Anyway, I spent a largeish chunk of the past weekend rewriting it.

March 17, 2017

I'd like to clarify that I've only started a bug report "Dear Princess Celestia" the one time, and it was several years ago.

March 16, 2017

I just noticed that glibc turned MB_CUR_MAX into a function call. No WONDER the multibyte stuff's insanely slow with that library.

Oh well, I only care about performance under bionic and musl: that glibc nonsense can go hang. Sigh: except musl's doing it now too. Honestly, utf8 parsing is _simple_, that's one of the big advantages of utf8, why are the C libraries making this so complicated and expensive? Do I need to write inline code for this?
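Part of why the library overhead rankles is that the decode itself is a handful of shifts and masks. A minimal sketch (my own illustration: utf8_decode is a made-up name, not a toybox or libc function, and real code would also reject overlong encodings and surrogates):

```c
// Decode one UTF-8 character from str into *out. Returns the number of
// bytes consumed, or -1 for an invalid sequence. Sketch only: does not
// reject overlong encodings or surrogate code points.
int utf8_decode(const char *str, unsigned *out)
{
  const unsigned char *s = (const unsigned char *)str;
  unsigned c = *s, wc;
  int len, i;

  if (c < 0x80) { *out = c; return 1; }                     // ASCII fast path
  else if ((c & 0xe0) == 0xc0) { wc = c & 0x1f; len = 2; }  // 110xxxxx
  else if ((c & 0xf0) == 0xe0) { wc = c & 0x0f; len = 3; }  // 1110xxxx
  else if ((c & 0xf8) == 0xf0) { wc = c & 0x07; len = 4; }  // 11110xxx
  else return -1;                     // stray continuation byte, or 0xfe/0xff

  for (i = 1; i < len; i++) {
    if ((s[i] & 0xc0) != 0x80) return -1;  // not a 10xxxxxx continuation byte
    wc = (wc << 6) | (s[i] & 0x3f);        // 6 payload bits per trailing byte
  }
  *out = wc;
  return len;
}
```

No locale lookup, no function call per MB_CUR_MAX query, just bit twiddling on the bytes in front of you.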

March 10, 2017

Back home, recovering from jetlag.

Blah, I get used to semi-reasonable phone battery life in Tokyo and then I get back to the USA and the stupid NSA listening crap kicks in and kills my battery life again. (If your phone has a "speakerphone" mode why do you think it can't hear that well the rest of the time?)

And people wonder why I have a band-aid or electrical tape over all my laptop cameras when not in use. (Well the guys don't, they gave me this little stick-on camera shutter as one of the speaker gifts. I'm sticking with the band-aid, an idea I got from Val Henson's livejournal, which lets you know how long ago _that_ was. The pad protects the lens for when you do want to use it.)

March 9, 2017

Last day in Tokyo this trip. Jeff and I talked about the GPS stuff and may actually have had a breakthrough: this nonsense makes perfect sense in POLAR coordinates, why are we doing everything in cartesian coordinates?

I didn't make it out to disney or the pokemon center, but I asked if I could get more flavors of kit-kat, and Jeff and Pat took me to a "kit kat chocolatier", an upscale boutique store that sells special flavor kit-kats (and nothing _but_ specialty kit-kats) in packs of 4 small sticks for $4 each. (Or big _really_ expensive assortment boxes.) I bought "butter" and "pistachio grapefruit". I didn't get the maple strawberry.

Flying home from Tokyo. It's another one of those "your plane takes off at 9:45 pm, sleep is impossible on United Economy Class, have fun being awake for two days!" dealies. I remember the one time I flew home on Delta leaving at a reasonable hour and implemented most of toybox "ps" on the flight. That was nice. All the flights since then have been sleep deprivation discomfort nightmares to the point where not only can I get nothing done on the flight, but there's a day or two recovery afterwards. But hey, no immediate danger of throwing up this time, so that's an improvement!

So many south-by-south-south people on the actual flight to Austin. Talking about how you can never let engineers and designers talk directly to each other or it'll screw up your project management. It's the dot com boom all over again. I now hate Groupon and whatever this guy's web dating site is on general principles.

Arrived home. United has promised to try to find my luggage over the next few days (their current theory is Denver). The Soup or Shuttle people (I'm picking Shuttle, there's soup at home) have been evicted from their normal counter attached to a wall and have set up an adorable little tent across the hall, due to airport construction.

March 7, 2017

Noticed I fly back thursday evening, not saturday morning. Frantic schedule reshuffle to try to compensate.

March 4, 2017

Another day spent mostly in the hotel.

So last month I figured out how to implement getconf in about 150 lines, but 20 of those are makefile plumbing doing evil things with sed. Finally got that close to being ready to check in.

Long talk with Jeff working out todo list stuff for the week. I now have a general idea why we're not doing the j3 mmu just yet: feature freeze for our ASIC tapeout, and the hitachi mmu design isn't the direction we want to go in (no chance of fitting that mess into an lx25, let alone an lx9; it generally doesn't FPGA well).

The blocker with turtle board manufacturing is testing: we need bitstreams and kernel support to drive all the hardware. We've got to test the serial console, audio out, HDMI out, ethernet, usb, sdcard, and GPIO "hat". (We also need to do a turtle board website.) We need to get that together and make a burn-in plan we can send to the manufacturing guys.

March 3, 2017

A day off! Spent huddling in the hotel. Sometime after lunch (I made it as far as 7-11 for a steamed bun; I keep trying to order "with curry in it" and getting "with sausage puck in it" but eh, close enough) I finally felt recovered enough to fire up one of the computers I brought with me. The mac's at the office (good riddance). The big machine's been suspended since last thursday and the battery didn't last that long, and I need an adapter to plug it in to Japanese power. Netbook it is!

I've decided to revert the toybox cut.c changes, both mine and the other guy's. When reviewing his code I didn't like the approach, and I just haven't got the stomach to look at mine right now either. I want to clear todo items.

Poking at the www directory, starting to tackle the faq.html backlog. It's a todo item that needs doing, and currently that's about the level of focus I have to deal with things right now.

But most of the day went to downloading/reading/replying to a week of email through pop3. (The combination of thunderbird and gmail's imap assumptions remains problematic, meaning about 15 minutes per ~500 message chunk, and given a week's accumulation of several mailing lists (including linux kernel), that took several hours to download.)

Hey, I've been nominated for the Google Open Source Peer Bonus Program! I never filled out the paperwork for the bug bounty stuff, but they found another way. I can no longer say Google's never paid me a dime, because they're offering to send me a $250 gift card! (Woo!) Ok, the link I got to a google doc I'm supposed to fill out doesn't work (sort of 404-ish, only google doc's version of it; I might be cutting and pasting it wrong). But as they say, it's an honor just to be nominated.

All the naps today. Finally left the hotel for dinner with Jeff and Rich and Pat at a mexican restaurant, where I got off the Ginza line in a part of Tokyo that Jeff insists the Ginza line does not go. I think this means I got a train lost. (I went up an elevator and then couldn't find it again to go back sixty seconds later. Possibly one of them brigadoon things. Oh well, this is why my phone has GPS and google maps.)

March 3, 2017

Everybody comes on the last day of these things. We ran out of handouts (and business cards) printed in Japanese and had to use ones printed in English.

Nine hours of standing in painful shoes. Three hours of frantic booth packing and boxing. Then we went home. There was more to it, but I couldn't tell you what at this point.

My big learning experience for the day (which I already knew) is that my working style and Rich the sales guy's working style do not combine well at all under pressure. (He never stops smiling and confidently telling us what happens now. He's often wrong, but never uncertain.) During booth packing, this became... pronounced.

March 2, 2017

Back to "Tokyo Big Sight" (really, that's what it's called) for the second day of Smart Energy Week. We stand in the booth. We tell people about our stuff when they ask (generally about 3 times an hour, although sometimes we have two or three groups at once). Then we Stand There Looking Professional.

My suit jacket does not fit. (They didn't have that in Gaijin Size either.)

March 1, 2017

First day of smart energy week and we're _already_ too stressed to function well. Woo!

We got there when they opened at 8am and we had Badge Panic: the paperwork to get our badges was in the booth, and we couldn't get to the booth without badges. Jeff threw up his hands that this was unfixable because Japan and went off to sulk (it was a Looooong 3 days), so I started my method of solving this: go up to the first line bureaucrat as a supplicant, apologize profusely, be humble, acknowledge that their job isn't to fix this but to tell me who _can_, and attempt to get referred to a manager who can start the exception process, hopefully being gradually escalated to people who can fix things.

Alas, this does not mesh with Jeff's approach to bureaucrats (which involves being angry and important and disapproving, and assuming the worst of everybody; he was 100% certain that the manager the door guard summoned would throw me out on the street because That's How They Do That Here). So after that got screwed up I started over at the registration desk, but every time I started to get traction with somebody I was talking to, Jeff would come up and be visibly angry at them and they'd stop trying to help me. (Either of us could have gotten this fixed, but our methods were 100% incompatible. It was not a "good cop, bad cop" situation.)

The third time Jeff derailed what I was doing, I sat down and waited for somebody else to fix it, at which point Pat wrote out a new form longhand to get herself registered (including writing a "business card" by hand), and then used that badge to go to the booth to get the stuff we left there yesterday.

Then we left Jeff and Rich and two Japanese men from a partner company running the booth, and Pat took me to the end of the train line to buy shoes at a ridiculously large mall called Lalaport in Toyosu. We went to a place called ABC-Mart which, despite the name, is a shoe store. It sold me a terrible, terrible pair of dress loafers which would be fine shoes if they had them in my size, but the entire store only stocks up to half a size smaller than I wear. But at least I could physically fit these on my feet.

Before this, on the way through the mall, we stopped at a sock store. One which sells expensive custom socks, and that's it. I got a single pair of white socks for $9. (We needn't have bothered because the shoe store had normal socks, but we passed something on the list and went "Socks! Right! Need those.")

Did I mention the mall was enormous? (Over 400 stores.)

I then stood, in painful shoes, for 9 hours. I don't recommend it. (A woman in high heels stood at the corner of an adjacent booth handing out literature for the same amount of time, not showing any pain. I wanted to ask if she had any shoe related advice, but don't speak enough Japanese.)

At dinner Rich the sales guy kept talking about the Dorito. I left the table and sat in the waiting area near the entrance until it was time to go. I do not want to hear what an affluent older white republican male has to say about current politics; they had their say and that's what got us into this mess, they can stop talking now. (But he won't. And he _never_stops_smiling_.)

Tired, stressed, still a bit sick.

February 28, 2017

Setup scramble day 3. We plugged a lot of aluminum and plywood into a lot of other aluminum and plywood using plastic connectors, and then there was a lot of electrical wiring and some plexiglass. Also, velcro comes in tape form, in very large spools: we made extensive use of this.

We're using a sort of modular booth system that they once hired a consultant (or possibly somebody very good with lego) to assemble in a nonstandard way, which they really liked the layout of but DID NOT DOCUMENT. (This might have been at the first Distributech, in Florida a year ago?) They're trying to reproduce this layout; the elaborate diagrams in the book describe the standard layout we're not using. Instead they have cell phone photos of the look they want, and we're trying to figure out what pieces go where from those pictures.

One of the bolts needed to assemble one of the tables fell out in shipping, and we couldn't find it, and of course it's a US size you can't get in Japan. (Mail order sure, but not overnight.) Luckily they can dismantle one of the tables back in the office and get an equivalent bolt for the duration of the show.

The graphics order finished early, so we picked it up and stuck graphics on things. (The print shop is next to the office, which is like an hour from the venue by train. After the first day we only took the train _back_, and took a taxi to the venue in the mornings. With 4 people it amortizes out to a reasonable-ish cost.) Some of the graphics are backlit and were printed (at great expense) on transparent plastic instead of the extra-bright synthetic paper used for everything else. I think the synthetic paper is brighter; the ink printed on the plastic is NOT transparent, and it's a solid color background. There's some pretty bright lights behind it but it's not neon or anything. Oh well, learning experience.

At the end of the day we went back to the office to try to get the Mac laptop I brought to update to a version of keynote that can run the presentation they want to have cycling on the tables. (We need four computers to power the four TVs in the booth, and Jeff likes macs so he uses macs for everything.)

Jeff did not believe that I have trouble getting Macs to do everything, and assured me that this was trivial and they're so user friendly. Almost three hours later, he at least managed to get the new version of keynote on his machine to export the file to the old format that the old version (which we couldn't upgrade my mac from) could read.

By "couldn't upgrade" I mean the Apple store wouldn't show keynote in the list of things it could upgrade (even though it was installed), and when we searched for it by name it had an "upgrade" button that just spun a progress indicator endlessly without saying what was wrong. After we reset my Apple ID password and logged in fresh (which didn't fix it), he dug through the magic log files only Apple experts know about until he saw that my Apple ID is "not provisioned", whatever that means.

So yeah, after 3 years of me being unable to upgrade it, the mac expert tried to get it to work and the same silent failures hit him too, but Macs are more user friendly because they're what he's used to. Me, I admit xubuntu is terrible and I'm just really familiar with it.

February 27, 2017

Digestion-wise, I am now to the point where I can eat small amounts of food, and then between one bite and the next it turns from food into This Tastes Wrong.

Booth setup, on site. The site being a sort of giant upside down pyramid you reach via a long train ride through industrial wasteland. I didn't know Tokyo _had_ industrial wasteland, but apparently it does! (Fallout from the 1997 Asian financial crisis: an optimistic construction boom expanding the city out in this direction wound up with a lot of buildings nobody wanted to pay for. Which means the enormous convention center way out here is cheap to run events in, at least by Tokyo standards.)

The _start_ of the commute happened bright and early, meaning we set out at the height of tokyo rush hour. That was an experience. The trains weren't standing room only, they were way more packed than that. I was literally pressed on all four sides by people in black suits, and then when we got off it was almost the same density moving: you could _not_ cut across the flow and just had to sort of follow along until you got to a decision point. The train out to the convention center was less crowded, but we also stopped off at a starbucks for a bit to let the crowd thin out. (They don't have mango black tea lemonade here. They have Matcha Latte but I wasn't up for drinking much of it.)

In the convention center, which is cold and cavernous and could double as an aircraft hangar (and the delivery doors were open so they couldn't heat the place), we got a big square with tape at the corners marking our space on the bare concrete and four large crates of parts. From this, we must assemble a booth. We have 3 days. At the end of the show, we have 4 hours to take it down and pack it away again.

Most of the other booths around us had professional construction crews working since the 3am delivery time. They're cutting plywood with power tools. The one right behind us is welding rebar.

The electricity is under conductive metal conduits (mess up and you zap people 20 feet away), and they gave us a "screw terminal" which Jeff had to cut an extension cord to get bare wires to connect to it. The electricity is of course live while he does this. Somehow nobody's been electrocuted yet. (OSHA has no place in this country.)

Once we measured out where stuff should go and laid extension cords and ethernet cable down in a big backwards Z (dyslexic Zorro is our electrician), we waited for the carpet guys to come put carpet over it. And waited. And waited. Marketing Rich told them to come at 5pm because "we need as much time as possible to set up the booth", but carpet is step 2 in our checklist: the booth goes on TOP of the carpet. We can't do anything else until the carpet is in.

Everybody else's carpet is in already, and the professional construction crews are still at it: the one right behind us is now welding fancy painted rebar 15 feet off the ground. It's quite impressive.

February 26, 2017

In Tokyo! I am not well. My first attempt to leave the hotel was derailed by an urgent need to turn back around and spend yet more quality time in the bathroom. I haven't eaten anything since Thursday and the idea of food does not appeal at _all_ right now. But at least they have my tea here. No matter how nauseated, my system knows what to do with sweet cold tea with milk in it. I grew up on this stuff, it's digestively reassuring, and several bottles of it did help my system achieve throughput, which is progress without being an improvement. (It's now a different kind of bad.)

I do not have the dress shoes and sport jacket I meant to bring for Smart Energy Week. (I forgot. They're back in Austin. Technically Rich-the-manager asked me to bring them to the California convention I wound up not going to, but I should have brought them to this one.)

Finally made it to the office. There's a certain amount of panic about setup for the show. I went with the old reliable "let's make checklists", so we did that. Write down everything that needs to happen, sort it in order, figure out what we can do now and what has to happen after other things that have their own schedules attached.

The tight deadline is because the booth was just used at Distributech in California, and had to be shipped to Tokyo at great expense to get it here soon enough for us to use it for the other show. (I've been registered for Distributech twice but never actually ended up going; all the spam my work email gets is due to that.) The four giant storage cubes of Booth Parts should arrive onsite at 3am.

We needed to go over the graphics files for booth signs and such, so we did that. We needed to shop for booth supplies that need replacing after Distributech, so we did that. And asked Niishi-san to proofread all the Japanese text on the graphics and correct it to what native speakers would say.

The engineer who made the graphics isn't Japanese, he's Chinese, but he's basically an otaku and has a Japanese wife, and wants to do things the Japanese way. And just as American engineers get promoted into management when they're ready to retire and become useless, Japanese engineers get promoted into marketing (you built it, you know how to tell people what it does), so he very very much wants to be Vice President of Marketing. So that's what it says on our business cards, even though he mostly works out mathematical signal processing algorithms for us (and is quite good at it, but don't tell anyone).

The problem is, when actual marketing needs to occur he refuses to tell people what our stuff can do because he doesn't think they'll believe us, and wants to be "credible". So we went around changing all the "fault location to 10 meters" claims to the more accurate "fault location to 3 meters", and so on. ("But other people's products can't do that!" "Yeah, that's why they should buy ours, and we can do a live demo if they challenge us." Sort of the POINT of marketing? Sigh. As marketers go, he's a really good engineer. Very mathy.)

The mac I brought back needs to be upgraded because it's vintage 2015 software all the way down, and since Steve Jobs died you have to give Apple more money annually in order to be able to read newly produced data, so we don't expect it can read the file formats current Apple software versions save. As with many things, this is assumed to be easy so they're leaving it for later. Me, I distrust all things mac at this point. (There are four graphics tables with a big TV under plexiglass, each run by its own laptop. And since it's a mac show they want to use macs, so we're using mine, Jeff's, Pat's, and Rich's mac laptops.)

On the bright side, this mac problem isn't one of the ones where Apple's "remember how the iMac didn't have a floppy drive and that was Steve Jobs being brilliant? Let's remove a feature the hardware's had for years and call that this year's upgrade" nonsense has obviously crippled something. This is just "who needs compatibility" software laziness backstopped by greed.

Jeff is sure the software upgrades in the mac will be trivial and go smoothly, since that's how his universe works, so he keeps putting it off. I fully expect the box to wind up bricked and need hours of forensic spelunking with special tools to have a desktop again because that's how my universe works. We'll see.

February 24, 2017

Travel sickness. 3:30 am alarm for another 5:40 am flight (to SFO this time) and then an 11 hour flight to Japan. Sick the whole way, with that lovely "nausea plus constipation" combo that just refuses to resolve itself either way, and United flights are uncomfortable enough when you're well.

Got in saturday afternoon (9 hour time difference plus almost 24 hours of travel).

Lemme back up: I saw more talks yesterday, went out to dinner with two j-core developers (a tiny little meetup, much fun was had by all, and I emailed some ideas they had to Jeff) and along the way I had a Voodoo Donut because it's Portland and that's what you do.

The donut may have been a mistake.

My redeye on the way in turned into a redeye on the way out because although my flight out of California wasn't a redeye (I checked!), I had to fly from Oregon to California to _get_there_. Luckily I caught it right after the j-core dinner, but not in time to get to bed early. No trains that early, but my airbnb host said he'd drive me to the airport for $20 even at 3am, and I took him up on it. But this meant I only got a couple hours of sleep, and was nauseous when I woke up.

The nausea did not go away. I dunno if it was the donut or the fact I tend to get sick after a week in a new place (the different local bacteria catch up with me), but I had borderline food poisoning for the entire duration of an international flight. On United, which is a stacking debuff although the flight attendants were very nice. I couldn't eat the in-flight meals, and when I tried to at least eat one of the not-pretzel snack bags I gained an understanding of the saying "tastes like ashes".

I don't recommend the experience.

On a related note, thanks to the airline I am now the proud owner of something called a "stroopwafel". (Mint in bag. Well, caramel. Some foxing around the edges.) Maybe, someday, I might be able to eat it. We all have dreams.

You know how I just complained about being too jetlagged to prepare and give a proper talk at ELC? (Well, to my standards anyway. Busy and distracted contributed too. Yes, I'm aware people said they liked it and I shouldn't be too hard on myself, but I think I'm going to stop giving talks until my schedule lets up and I can devote enough time to prep work.) Anyway, now I've gotten 2 hours of sleep in a 2 and 1/2 day period on top of the jetlag, having just spent an active convention trying to recover from the jetlag the first redeye at the _start_ of the week gave me. I do not expect to be of much use to anybody tomorrow. I tend to perk up in Tokyo, a change being as good as a rest and all, but there are limits.

Bonus fun: the international dateline ate a day so today was Friday and tomorrow is Sunday. Meaning I get one less recovery day before showtime. But for now, I can haz hotel room with Zzzzz in it.

February 22, 2017

Feeling much better today. Saw a bunch more talks.

There are a bunch of BSD-ish licensed embedded RTOS projects going on; the Linux Foundation has decided to push Zephyr (the same way it pushed Maemo, MeeGo, Yocto, and Tizen before it; I assume somebody gave them money) but Google's still doing Fuchsia, Sony added an ELF loader to Contiki, and so on.

Walked from the ELC venue (hotel) out to a whatever Kinko's is calling itself this week (8 blocks away?) to pick up the print job the SEI guys prepared for my Turtle board demo, set up the sign, and showed people the board! We haven't got a website set up yet (there's a domain purchased but it's a parking page), but we have a board and it runs Linux and we're preparing to make more.

The attendees seemed to like it. We're committing to do a production run in May, and are accepting preorders, for a definition of "accepting" that's deeply problematic. Lots of people walked off with the preorder forms, which have no contact info for us. (Not even an email address.) And no way to pay us; it says give us your contact info and we'll contact _you_ back, somehow, at some point. (Jen assumed people would read the form, fill it out, and hand it back at the booth. One person gave me cash to order the board; that was the only filled-out form I got back.)

The problem is we haven't figured out how to take people's MONEY yet. For the big products it's corporate purchase orders, but individual retail turns out to be tricky. The company is headquartered in Canada and the engineers are mostly in Japan, and both jurisdictions attach buckets of regulatory baggage to getting a "merchant account", hoops we haven't jumped through yet, and without which we can't take credit cards. They have a US office, but haven't got a US corporate subsidiary. (This is why I still get paid as a contractor, a "temporary" condition now in its third year. They can't do the insurance and tax withholding stuff until they have a US subsidiary, and two countries is already more than they can handle at current staffing levels.)

February 21, 2017

I no longer try to fly on the same day as I do whatever I'm flying in for (travel eats a day), but I may have to amend that. Redeye flights eat _more_ than a day, I am _out_of_it_ today.

Saw the Device Tree in Zephyr project talk. (Lots of Zephyr talks; I guess that's what the Linux Foundation's overstocked on and trying to push this year. Try the veal!) Nice to see device tree moving beyond Linux, but no real solution to the "all those device tree files in the kernel are GPL'd so BSD won't touch it" problem, which is why we now have to deal with ACPI on arm. (Bravo guys. Bravo.)

Went to the Embedded Linux Size Reduction Techniques talk which mentioned toybox! (And I felt bad telling them that toysh is crap, but it is. I need to find time/energy to work on that. But not right now, still preparing my own talk material, while attending all these others.)

Then I skipped the next couple talks to finish preparing my OWN talk, and gave it starting at 4:20...

Sigh. Same failure mode as usual: didn't fit the material into the time allotted. And the darn jetlag hangover from yesterday's redeye REALLY SCREWED ME UP: around 4pm I was in desperate need of a nap, but needed to ramp up to be onstage. I caffeinated as heavily as I could, but my eyesight goes all sparkly with visual migraine symptoms if I caffeinate too much these days. Hard to give a talk if you can't see.

Several people told me they enjoyed it, but I has a disappoint. I could have done so much better. I hope the video's at least watchable. I _really_ need to do podcasts of this stuff.

February 20, 2017

My flight to ELC left at 5:55AM, meaning I needed to leave for the airport at 4, meaning I needed to be up by 3, and I wound up staying awake all night.

I don't recommend it.

February 19, 2017

And I will be demoing my turtle board at the ELC showcase thing. Lovely. (I asked them to send me a second, but they can't do it in time. They might get a poster together though. Printed there.)

We need an external push to do preorders to actually manufacture these suckers. The design's been ok for months, we just need a deadline attached to defend it from all the other things with deadlines attached.

February 18, 2017

A couple weeks back the Adelie Linux guys poked me on the #toybox channel on freenode and suggested I look at their getconf implementation, which they're willing to license to me under 0BSD. And I've gradually been poking at it. It's a reasonably clean implementation, if you're willing to have an #ifdef staircase for every symbol. Which I am not.

So I went through all the symbols (in 3 categories) and confirmed that the names given on the getconf command line are mechanically transformable (via regex) into the symbol names pulled out of limits.h and unistd.h. And I can get that list of symbol names with:

gcc -E -dM - <<< '#include <limits.h>'

So I need to do things with sed/awk/grep to generate a new header that defines all the symbols (to the "undefined" value if necessary) and add that to the plumbing.
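The name extraction is simple enough to sketch. Here's a first pass at the sed, run against a canned sample standing in for the real gcc -E -dM dump (the pattern deliberately skips the underscore-prefixed compiler internals):

```shell
# Canned sample standing in for the real "gcc -E -dM" output.
printf '#define INT_MAX 2147483647\n#define __GNUC__ 12\n#define LONG_BIT 64\n' |
  sed -n 's/^#define \([A-Z][A-Z_0-9]*\) .*/\1/p'
```

That prints INT_MAX and LONG_BIT one per line, and drops __GNUC__ since it starts with an underscore.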

First is coming up with a sed expression that'll parse the C source file to create a header containing an array of either the symbol that's #defined in the header or an UNKNOWN flag (probably -1) so the C doesn't need 5 lines of:

#ifdef SYMBOL

For half the symbols. (It's icky and it's why I haven't done getconf before now.)

There's a few stages of this. One is rewriting the C files so the symbols are just:

char *limit_vars[] = {

And then come up with a sed line to extract those as normal unquoted strings one per line. (And do the prefix mangling that transforms them into the expected symbol names.)
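That extraction plus the mangling might look like this (the _POSIX_ prefix here is purely illustrative of the kind of mangling involved, not the real mapping for every symbol):

```shell
# Canned sample of lines from the rewritten C string array.
printf '  "ARG_MAX",\n  "OPEN_MAX",\n' |
  sed -n 's/^ *"\([^"]*\)",*$/_POSIX_\1/p'
```

Which emits _POSIX_ARG_MAX and _POSIX_OPEN_MAX one per line, ready to compare against the list pulled out of the preprocessor dump.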

Then I need to use that list to turn the first list (from gcc -E -dM) into a list of #defined symbols in the same order as the string array, so I can use the string array position to index the symbol array.

Of course doing this is trickier than it seems. I need to substitute values preserving order, the tool for which is sed. I don't want a shell for loop iterating through symbol names and calling sed each time; that would make the build really slow. So I have a sed invocation creating another sed invocation, which possibly violates the Geneva Convention. (I'd have to check.)

The tricky bit is coming up with a sed command line checking for each symbol and outputting it, and outputting an alternative at the end if it _hasn't_ been matched, while preserving order. ("at the end" meaning the end of the sed script, not the end of the symbol list input.) Grep can remove things, but not replace them maintaining order.
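A sketch of the sed-generating-sed trick (file names and the -1 sentinel are illustrative): the first sed turns the defined-symbol list into a script of substitutions that tag each defined name, and a second pass maps anything left untagged to -1, preserving the array order throughout:

```shell
cd "$(mktemp -d)"
# Symbols the header actually #defines (any order).
printf 'INT_MAX\nLONG_BIT\n' > defined.txt
# Symbol names in the order the string array expects them.
printf 'ARG_MAX\nINT_MAX\nLONG_BIT\n' > order.txt

# Generate a sed script with one "s/^NAME$/NAME,/" line per defined symbol.
sed 's@.*@s/^&$/&,/@' defined.txt > mark.sed
# Apply it, then turn anything still bare into the "undefined" sentinel.
sed -f mark.sed order.txt | sed 's/^[A-Z][A-Z_0-9]*$/-1,/'
```

That outputs "-1," for ARG_MAX followed by "INT_MAX," and "LONG_BIT,": usable as array initializers in the same order as the string array, so the string index matches the value index.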

February 17, 2017

Canceled my return flight from Portland, instead work's flying me to Tokyo.

Since I can't leave until after ELC, I get like a week's notice this time! Woo! (Luxury.) Of course I was already panicking to get everything done before ELC...

In THEORY mkroot can do everything Aboriginal Linux can in like 1/5 the code. (Factor out the toolchain and the rest simplifies greatly.) In practice, making it look easy takes lots of up-front work. I've been trying to get the actual code to the point I can do a release before doing the talk material explaining how to use it. If this _was_ my $DAYJOB I'd probably be in reasonable shape. As is... I can cut a release from the conference, right?

Working on it...

February 13, 2017

It's so easy to fall behind on blogging, but I promised in my patreon I'd try to keep up with it.

I've been poking at my ELC talk presentation materials for a while now. I need to get a toybox release out, talk Rich into a musl-cross-make release, do a mkroot release depending on both of those, and then I can write the actual presentation using all of that.

The presentation is growing a bunch of branches and it's hard to sequence. I want to cover "make defconfig + hello world initramfs", and why it's only simplest from a certain point of view. I want to cover "hello world on bare metal". I want to cover the Linux boot sequence. I want to walk through the miniconfig symbol list and show what the vital ones DO, and then what the important but optional ones do...

As with most of my presentations, figuring out the SCOPE is hard enough, then the sequencing, then the timing. I don't consider a topic something I can really talk about when I have less than a half-dozen hours of material on it. This time I have a 2 hour tutorial, but it should be interactive not just lecture.

Sigh. What I should really do is screencasts with associated audio, posted as YouTube tutorials. I probably need video editing software for that though. (And deadlines to force me to _do_ it. :)

February 7, 2017

Wanna see something creepy and Orwellian?

A recent change to the "National Industrial Security Program Operating Manual" (NISPOM) requires Department Of Defense contractors to establish and maintain an "insider threat program", under which "Reportable Items" include allegiance to the United States, foreign preference, sexual behavior...

Of course this includes anything space related, and stacks on top of the ITAR export regulations that killed the US space program (because the crypto panic of the 1990's rubbed off on the space program when Intelsat 708 blew up in 1996, and then the space side didn't get relaxed when the crypto side did).

So now if you buy a screwdriver at Home Depot and use it to turn a screw on a spacecraft, that screwdriver becomes a munition that cannot be discussed with non-US persons (I.E. do not mention it EXISTS on the internet), AND you're subject to full McCarthy witch-hunt loyalty snooping inside the bubble.

This puts the guy who owns SpaceX joining the Alleged President's circle of advisers in a new light, doesn't it?

February 6, 2017

Have I ranted about the new dmesg API? Because it's terrible. And now we're stuck with it. (Even though the documentation still says "testing", and presumably continues to for the foreseeable future.)

So this guy (now one of the senior contributors to systemd) came up with a new /dev/kmsg API, and if you "cat /dev/kmsg" it hangs at the end. (So you have to open it O_NONBLOCK to get the _old_ behavior. That's just beautiful.) And that's when it doesn't spontaneously fail with "invalid argument" because your read() buffer is too small:

$ dd if=/dev/kmsg bs=110
6,30825,1057265865113,-;cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm), (N/A)
6,30826,1057265865118,-;cfg80211: (57240000 KHz - 63720000 KHz @2160000 KHz), (N/A, 4000 mBm), (N/A)
6,30827,1057338815561,-;cfg80211: World regulatory domain updated:
6,30828,1057338815576,-;cfg80211: DFS Master region: unset
dd: error reading '/dev/kmsg': Invalid argument
0+4 records in
0+4 records out
335 bytes (335 B) copied, 0.000563614 s, 594 kB/s

What is the magic "always big enough" value? It's 8k. Yes, they have an arbitrary limit that's _not_ page size, so "cat /dev/kmsg" may _seem_ to work but it's not reliable. If you implement something like cat using page size it'll work in your testing and then magically fail later in the field.

Except they made sure read was unreliable for other reasons too: the buffer moving under you can return EPIPE. Isn't dmesg a ring buffer? Yes it is, but they can't make that work reliably without EPIPE. Don't we already have EAGAIN, an existing errno that tells libc to retry the read? Yes, but they didn't use it because systemd guys care nothing for what came before, or consistency with the rest of the system. They value what they pulled out of their ass, and somehow Linus merged this.

This is just such an amazingly well designed API its author should receive some kind of award, at a very high speed, possibly aimed at his head. And of course it's a new API that lives alongside the old one but doesn't share state; SYSLOG_ACTION_CLEAR is ignored, now you lseek SEEK_DATA...

So yeah, Elliott sent me a complete rewrite of dmesg because he wants the new --follow option, and it has to be based on this new nonsense the kernel guys did because adding a couple new klogctl() entries would be too hard. It's not like you could do a SYSLOG_ACTION_RINGREAD that read from a ring buffer position to the end of the ring buffer (with -1 meaning "wherever it currently starts"), never returning more than SYSLOG_ACTION_SIZE_BUFFER bytes so you can reliably size bufp, and if bufp is NULL return the current ring buffer end position (which wraps at the same SIZE_BUFFER value), then the other one would be SYSLOG_ACTION_RINGWAIT that blocks until the current ring buffer end position != the one you pass in.

But no, that would be too disruptive, instead the kernel guys got a whole new /dev node that's SO well thought out. Sigh.

February 4, 2017

Sigh, ps and top remain fiddly.

(Update: really fiddly.)

February 3, 2017

A problem with replacing Aboriginal Linux with mkroot is that the musl-cross-make native toolchain doesn't include make, bash, or distcc. Hmmm. I haven't written my own make yet, and that's a big one that's not in toybox's roadmap to the 1.0 release because at the time I was thinking it belonged in qcc, but I'm not doing qcc any time soon. I can do distcc as an overlay (it's optional anyway, not needed for chroot/container mode) and I _do_ have a proper bash replacement shell in the toybox todo list (as basically the last item, although it could get bumped higher).

Meanwhile in toybox, oneit.c is broken (when I taught xopen() never to stomp stdin/stdout/stderr I forgot to switch it over to xopen_stdio(), oops), and I should probably use returns_twice instead of noinline for the XVFORK() stuff.

Backstory: musl-cross-make's vfork is broken, it seems to be an interaction between musl and current gcc where the compiler doesn't know vfork can return twice (just like setjmp), and thus it "optimizes" stack usage, as in function calls reuse bits of the stack that store local variables that the compiler's liveness analysis thinks we're done with, but which we're NOT done with if you can longjmp() back to the earlier part of the function (which vfork does when the child exits).

The compiler is supposed to _know_ this, it's CLEARLY A BUG. But whether it's gcc's bug or musl's bug is not entirely clear.

I'm now on my third attempt to fix it. This is one of those "Rich has very strong opinions about how people should use his code and intentionally breaks ways of using it he doesn't agree with" things, ala "things like dropbear built against musl don't work on nommu because he provides a broken but existing fork() so the ./configure stage doesn't know to use vfork() instead, and there's NO WAY to tell at compile time except maybe checking for #ifdef __FDPIC__ which doesn't help binflt or static pie".

Meanwhile I'm over here hitting it with a rock until it works.

February 2, 2017

Working on musl-cross-make and the mkroot kernel stuff. I've got a script that tries to use musl-cross-make to build cross compilers for all the targets musl supports, but there's fiddliness (cortex-m is nommu but arm doesn't have fdpic support yet, so it only builds static PIE, which is a lot less efficient).

Not entirely sure where to post this. Try to convince Rich to merge it? (I'm trying to convince him to host the binary tarballs it produces as output.) Put it in the mkroot repo? Hmmm...

February 1, 2017

Starbucks emailed me a free drink coupon for my birthday, and I like hanging out there to program. (With the big headphones to drown out Zombie Sinatra.) If you get a mango black tea lemonade in the "egregious" size (um, alegretto?) it's 50 cent refills as long as you hang out, and the 9 cell battery is still working in my netbook.

Alas, I was too busy running errands to make it there, so the coupon expired. Wound up trying to get work done at the big machine instead.

The big machine (halfbrick) is the 8-way SMP i7 laptop I got from System76 back in 2013. It still works fine, but is not as portable as the netbook and I generally try to have one "master" machine that I can rsync to the others without worrying about integrating diverse changes. But halfbrick is waaaay faster and I'm trying to get musl-cross-make toolchains for all the targets. This means I'm collating my various toolchain build snippets into a single script that iterates through all the known targets and builds both static cross compiler and native compiler versions of each. (First building a dynamically linked i686-linux-musl cross compiler that it can build those other statically linked cross compilers with. This means the i686 target gets built 3 times.)
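The iteration logic is roughly this dry-run sketch (the target list is abbreviated and illustrative, and the echo stands in for the real musl-cross-make invocations):

```shell
#!/bin/bash
# Abbreviated, illustrative target list; musl supports more than this.
TARGETS="x86_64-linux-musl arm-linux-musleabihf sh2eb-linux-muslfdpic"

build() {  # $1 = target triplet, $2 = bootstrap/cross/native
  # A real version would run musl-cross-make here; this just echoes the plan.
  echo "build $2 toolchain for $1"
}

# Dynamically linked i686 bootstrap compiler first, then a static cross
# compiler and a native compiler for each target (i686 included, which is
# how that target ends up built three times).
build i686-linux-musl bootstrap
for t in i686-linux-musl $TARGETS; do
  build "$t" cross
  build "$t" native
done
```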

My goal with this is to get toolchains built how I like them, test them all, and then make puppy eyes at Rich to cut a musl-cross-make release and host binary tarballs of toolchains built with this script. (Hence script must be portable and reproducible. Or at least demonstrate the build invocations I want to Rich so he can do it his way.)

Alas, Rich is really busy and hasn't got much time to coordinate on this. I've asked him to debug some weirdness I've seen, but in terms of actual project design work, musl-cross-make is too far down his todo list at the moment.

January 31, 2017

Trying to get a toybox release out. So many todo items, but they can't all fit in this go.

I'll probably just do a "ship what I've got" thing at some point. Until mkroot is ready I don't have a proper test environment to build Linux From Scratch under, I suspect there will be lots of stuff to fix when I finally get that connected back up.

January 30, 2017

Apparently I am _not_ flying to Distributech this week.

On Saturday Rich (the company president, not the developer) said they'd need me as a warm body to help run SEI's booth at Distributech in California, and they were going to fly me there either today or tomorrow. But today, word is we're not doing that. This is the second year in a row they registered me for Distributech, and there is SO MUCH SPAM related to this. (Going to my work email instead of my personal one, but still.)

January 29, 2017

I called Jeff and had a long talk with him, and I just can't leave SEI right now. The money's terrible but I've spent two years working on this technology and I want to see it SHIP darn it.

Apologized profusely to the Colorado recruiter. I've been dragging my feet about filling out the paperwork for days, and I'm experienced enough at being me to spot when I'm trying to tell myself something.

Sigh. When Jeff visited last week he said he needed a couple more weeks to resolve the funding stuff. Under normal circumstances this wouldn't be any of my business, but "when can I go back to a full-time salary" is kind of my business.

January 27, 2017

Hanging out at the Starbucks in the domain, poking at mkroot, trying to come up with a related kernel build design.

Right now it makes a root filesystem, which is target-agnostic but just uses the supplied toolchain to determine what it's building for. But kernel builds have a .config, which I assembled in Aboriginal Linux using target-specific information appended to a generic portion. And then there's creating the script which has qemu command line arguments, kernel command line arguments, and the serial console. Plus the location the resulting kernel binary lives varies all over the place. (It would be lovely if all qemu targets had the ELF loader hooked up so I could always "-kernel vmlinux", but no. Several want arch/$ARCH/boot/*[Ii]mage but it's not that simple because arm builds BOTH "Image" and "zImage"... Wheee.)
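Collecting that per-target knowledge ends up as a lookup table somewhere; a minimal sketch as a shell function (the entries shown are ones I believe are right for current kernels, but the real table needs a line per supported target):

```shell
# Where each architecture's bootable kernel image lands after a build.
kernel_image() {
  case "$1" in
    arm)          echo arch/arm/boot/zImage ;;   # builds Image AND zImage
    arm64)        echo arch/arm64/boot/Image ;;
    i686|x86_64)  echo arch/x86/boot/bzImage ;;
    mips|powerpc) echo vmlinux ;;                # qemu loads the ELF directly
    *)            echo "unknown target: $1" >&2; return 1 ;;
  esac
}
```

Then the qemu launch script can say -kernel "$(kernel_image $TARGET)" instead of sprinkling special cases everywhere.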

The recruiter paperwork was a giant PDF that renders as a one page ad for Adobe. I'm not sure "For best experience" is an accurate description of "nothing but our proprietary package can render this at all". It's 2017, are there still websites that only render in Internet Explorer?

They sent me broken up files, but I can't view all of _those_ either.

So I swung by Kinko's (it's been Fedex Office for years, but if I say that nobody knows what I mean) assuming their windows machines could print this out, but when I opened the envelope the original "one big PDF" had only printed 5 pages. The broken up version had more attachments than that. So this stuff doesn't obviously work for a professional print shop, either.

Since verbally agreeing to do the new Colorado job, I've gotten 5 emails from other recruiters; it's apparently recruiter season again. But what I really want to do is go back to a fulltime salary with Jeff's company, which alas is not under my control. (I gave them 7 months already.)

January 26, 2017

Somebody asked on the buildroot list if I was going to resubmit the patch to add toybox support to buildroot.

I haven't yet for a couple reasons: first busybox is deeply integrated into buildroot and replicating that for toybox is a pain, secondly I don't really care that much about buildroot so it hasn't bubbled to the top of my todo list.

Then a buildroot developer asked about the difference between busybox and toybox and I answered his question, and his reply was that nothing I said was relevant to buildroot.

I'm trying to make Android a better base for embedded systems. I don't see how buildroot is relevant to this. So I guess we're in agreement.

January 25, 2017

Supporting date %s turns out to be unreasonably hard because strptime is stupid. Instead of returning a unix time (I.E. number of seconds since midnight at the start of Jan 1, 1970), it returns a broken down "struct tm" which cares about crazy things like timezone and day of the week, and then you feed that into mktime() to get it back into unixtime. Meaning if you feed %s to strptime, even if libc understands what to do with it, it adjusts the fields for local TZ nonsense, and then when you convert it back to unixtime with mktime() you can get a DIFFERENT RESULT. (And don't get me started on mktime() returning -1 for errors, which is a VALID RESULT if you care about representing historical dates. That's why Linus insists making time_t unsigned is not the way to fix y2038.)
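The timezone dependence is easy to demonstrate with GNU date: identical broken-down fields produce different unix times depending on TZ, which is exactly the round trip that strptime-plus-mktime goes through:

```shell
# Same wall-clock fields, different unix time depending on the timezone:
TZ=UTC0 date -d '1970-01-01 00:00:00' +%s   # prints 0
TZ=CST6 date -d '1970-01-01 00:00:00' +%s   # prints 21600 (6 hours later)
```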

This struct tm normalization nonsense is why Elliott added "chkmktime()" to date, which is what's breaking here: if you convert the time to unix time and back and it's different, barf. Possibly we should just bounds check the fields that posix says have ranges. (Except what I want to do is treat them as an array, and I _can_ on every libc I've checked, but there's nothing actually _requiring_ it. So I have to open code a loop. Sigh.)

As for supporting %s in date, in theory I could just go "if (!strcmp(s, "%s")) atol(blah);" but your pattern could be "date -D 'Timestamp: (%s)'" where it's surrounded by arbitrary context. In theory %s is the only escape when it occurs, because it would stomp all the others. (I suppose you could specify a timezone, but unix time has been consistently UTC since the 90's; the only reason it ever wasn't was dual booting with Windows, which wanted the hardware clock set to local time and yes, adjusted at each daylight savings transition.)

January 24, 2017

My friend Nick got herself in legal trouble, so I sent $300 to a bail bondsperson in Arkansas. I'd like to help more but I'm resource constrained.

Speaking of which, a recruiter offered me a new job in Colorado. Hmmm. It's the same pay rate Cray was offering (which is $15/hour more than $DAYJOB paid back when it was fulltime), and it's running a debug lab which would be a nice change of pace from what I've been doing (and gives a nice work/life split where I could work on toybox stuff without guilt in my time off).

And I've learned that "length of commute" is an enormous ingredient in my job satisfaction: the Cray gig was lovely in part because my apartment was something like 600 feet from work. San Diego had a 15 minute drive when there was no traffic (I.E. never).

Hmmm... Hard decision. I really really really really want my current $DAYJOB to work out, I love the technology, I love the people. But the money's been terrible for a while now. It was supposed to resolve in October, and didn't...

January 23, 2017

Somebody asked about windows binaries for toybox, and my reply was same as always, "I don't do Windows".

This was one of the FAQ entries in my old Aboriginal Linux project, and I still haven't ported over all the old busybox FAQ material I wrote.

Somebody else (Christopher Barry) asked me about appending to a CPIO file, specifically for mkinitramfs doing multi-stage output. It's an interesting question: there's a "TRAILER!!!" entry at the end (for historical reasons, and yes that's in-band signaling) but it's fixed size and can be trimmed off. You'd have to decompress and recompress the gzip wrapper, but that's not a huge deal.

Possibly I should add this to my toybox cpio.c todo list, but I've already got a bucket of stuff there for cpio: mostly adding a new data type with 64 bit timestamps, file sizes, xattr storage, maybe allow sparse files...

Alas to make _that_ useful, I'd need a corresponding kernel patch, and the kernel developers have disappeared so far up their own collective ass it's a long walk to get their attention and I really haven't bothered. (I have patches to make CONFIG_DEVTMPFS_MOUNT work for initramfs that I haven't pushed for most of a year now. Well, I sent a quick stab at it to lkml last year which got immediately shot down, and I created a cleaned up version and then never bothered to send it because dealing with those guys is just no fun.)

Sigh. I should hold my nose and try again. But not today.

January 22, 2017

Jeff (my boss) was in town today, and we got to hang out and program (while he waited for somebody to arrive on a delayed flight so he could go back to the management side of the force; there's a theme here).

We got something called OpenDNP3 working on a Turtle board, which is a protocol electrical utilities care about due to Standards. As usual Jeff did 95% of it, but I got him unblocked 3 times when a thing didn't work and I hit it with the appropriate rock.

Jeff is a lot like Batman. He has a day job as Bruce Wayne and it's really time consuming, but he's way more effective than any of us when he wanders into the programming side of the world. He just has to do so surreptitiously so the investors don't find out about it.

January 21, 2017

Fun morning. The "TAXI" button at the university accommodation made the intercom dial a taxi service, and I made an appointment to have them pick me up at 6am. When no taxi arrived I pressed the button again, but it kept hanging up halfway through the conversation (or possibly the person on the other end was hanging up). The text above the button said it was maxi taxi, with a number, so I called that with my cell phone and they said they hadn't had anything to do with the university in a year, and gave me the number of another taxi service that also said they weren't involved but were happy to send me a taxi.

So I got a ride to the airport, and at the end of it neither of my cards would work in their machine (I've only been able to use them at ATMs here) and the ride was more australian cash than I had on me (lady couldn't accept american dollars), so I ran around the airport looking for an ATM (nope), and went back and eventually we worked out that chip and sign would work (chip and pin wouldn't). Have I mentioned I hate the chip they put in new cards?

Anyway, by the time I got to the Virgin airlines counter, they said the plane would be taking off in 8 minutes and the doors were already closed, and I could give them an extra $120 australian dollars to be on the next flight to Melbourne, which doesn't land until after my connecting flight takes off.

So I bought another ticket on Tiger Air, which could get me to Melbourne with an hour to make my connecting flight, which I was assured was impossible but being stuck in Melbourne seemed better than being stuck in Hobart. They were very nice and put me in the front row so I could get off the plane quickly, and I ran through the airport to baggage claim (no they can't transfer bags between airlines, why would you think that?) and eventually made it to my plane (through checkin and security theatre and customs and all that) right as they were closing the doors.

I am seriously, _seriously_ out of shape. Ow.

I continue to be unable to get work done on United flights. Something about their economy class is designed to prevent concentration. Delta yes, United no. Not entirely sure why.

January 20, 2017

LCA is over. I have to catch a plane at 7am, so should probably go to bed early.

Some great panels all week, I need to go over the container internals tutorial for toybox.

I'm also going through the giant heap of pending toybox issues, which I've added to this week, of course. I still have a (December 30) patch from Elliott that I haven't applied yet, because it got fiddly and wound up on the todo list.

The real problem is I don't have a test case, and my attempts to make one ran into problems with the "date" command. I just ran into _another_ problem with the date command (wheee!) which I'm writing a message to the list about.

Sitting in the cafe uphill from the dorms the conference put us up in, it's doing a bunch of 80's music, including "Angel is a Centerfold" which is a song I find CREEPY. The singer never established any sort of relationship with a woman, she moved on with her life, went into presumably quite lucrative modeling work, and he's freaking out with some sort of ownership claim. There's a verse fantasizing about tracking down this woman he hasn't seen in years to take her to a motel room and rape her because she posed naked. "My blood runs cold, my memory has just been sold..." No, not yours. She is not your property? "A part of me has just been ripped..." Dude, you weren't even _involved_. Would he similarly be flipping out if he found out she'd gotten married (or died), or would that be ok with him?

Now somebody's turning Japanese (at least they think so). Much better song. (And now he'd walk 500 miles, only in a different accent.)

January 19, 2017

I gave my talk! The room was _enormous_ and intimidating. The schedule says it seats 650, and it was maybe half full for my talk, which I'm told is an excellent turnout for non-keynotes. (They just about fill it up in the mornings when nothing's scheduled against it.) I don't usually get intimidated speaking anymore, but this time I had outright butterflies and I think it showed in my presentation. :(

The talk appears to have been generally well received (update: the video is up and the outline is still there too). But I think I could have done better. I'm happy with my 2013 toybox talk, but I spent like a _month_ working on it before I gave it. (Seriously, I was working on that outline a week before the talk, posted it a day before the talk, and had time for a full run-through in the hotel the night before.)

This time I was editing right up until it was time to start, and hit Failure Mode #1: more material than I had time for. I knew it, and was rushing, but I made it like halfway through the outline before looking up and seeing the "2 minutes left" sign. (Very bright lights in my eyes, I missed the earlier signs.) So a less than graceful dismount, and I didn't get to half the material at all, nor time for questions. Sigh. (People came up after, but it wasn't recorded.)

Of course I didn't get my outline to Jen in time to turn it into slides (alas). I could scp the outline up to my site every time I tweaked it, but she needed a lot more turnaround time. Not a huge surprise, I usually present from an outline and a browser with lots of tabs I can point to primary sources with. This time the URL of every tab I wanted to show was in the outline, in order. (That was a failure mode in my 2013 OLS talk, I thought video was being recorded and it was only audio. Still beats the 2015 Linuxcon Tokyo talk where nothing was recorded.)

Over the years I've learned I need a special kind of outline to speak from: not enough detail and I'll go off on tangents that screw up the pacing and sequencing. Too much detail and I'm just reading text at people, which is boring. (I can craft _articles_ that way, but that's not how you do a talk.) So I had to get the sequencing right (which took days but I eventually was ok with the general scope and flow of the thing), get the level of detail in the material right (I'm happy I did that), and include the right _amount_ of stuff for the time available... which I screwed up again. Getting that right involves multiple practice runs and tends to be my failure mode. (If I can't blather about a topic extemporaneously for several hours straight and remain excited about what I'm saying, I don't know enough about it to present on it, or have a high enough interest level to be an engaging presenter. But I need to pick the most interesting _subset_ of that to fit in the time, and when lots of it's interesting that's a judgement call.)

Spent the rest of the day sort of fried. Convention's still going on, lots of panels, but I went back to the room and took a nap with 2 panels left to go in the day, then went out to dinner with a bunch of Red Hat guys.

January 18, 2017

Saw several good talks. I should do writeups of them, but there'll be videos online. (The guy doing the videos is the same guy who did the HDMI talk I watched last week.)

There was a speaker's dinner last night. Awful lot of speakers at this conference. Built-in conversation opening too, "so what's your talk on?" Beautiful venue. The "dinner" part was kind of silly though, the kind of fancy food that has such small portions you barely get anything to eat.

My talk still isn't ready. I mostly know what I want to cover, but I need so much more editing to have a _chance_ to fit it in my timeslot. (I've been getting up before sunrise due to jetlag, and using that time to spend a couple hours each day at talk prep. I've also had my netbook with me at the conference and been editing in some of the talks I've attended. Getting closer, but the talk's tomorrow. One more round of this and then I gotta Do The Thing...)

January 17, 2017

So talks. Much conference. Wow.

I may have buried the hatchet with Bradley Kuhn. We were both at a session on GPL compliance where he found out about my "promote public domain, make android self-hosting" agenda (I.E. the 2013 talk) and seemed surprised by it, and he explained that his goal for GPL enforcement wasn't to get code into projects but to get build and install instructions for hardware. Which is one of those "the reality of the embedded space is far more complicated than you seem to think" things. We talked over lunch.

I also saw the Open Invention Network lady again (I need to sign up toybox to the patent pool; aboriginal linux is in there already), and I talked to a couple people from OSI about their conflict with SPDX over the name of Zero Clause BSD; I think they understand why I'm upset now, but have no procedure to amend a previous decision. I should try to follow up with them but my todo list runneth over...

Elliott submitted a Microcom implementation (serial terminal) to toybox, which we've needed for a while. I was going to do one with netcat and stty, but factoring the shared infrastructure out into lib/ makes more sense.

Alas my attempt to clean it up fizzled out because the shared infrastructure I have so far doesn't quite match up with what microcom needs.

The lib/net.c pollinate() stuff does a poll loop between stdin/stdout and a device, but it does a unidirectional shutdown: stdin closing calls shutdown(2) on the network socket (the (2) convention means manual section 2, ala "man 2 shutdown"). In the other direction the network socket closing exits the poll loop which goes on to end the program. But serial connections don't have half-connections to shut down, because they don't propagate close state across the serial device. (They could with the data terminal ready line, but they don't because that used to signal modems, back when there were modems between the serial connections. You can also send a break, but nobody ever does and it's unclear what it would mean. I think the kernel may implement magic sysrq using that on the console?)

My first thought was just reverse input/output so a close on stdin (such as the terminal closing) exits the program, but hotplug can close the serial connection (when a USB adapter gets yanked) which should exit the program. So either side exiting should close the program. There's also break and exit key logic in the loop, which pollinate doesn't do. Possibly it needs some kind of callback to preprocess stdin input and respond appropriately? Hmmm...

The tty_sigreset() function isn't quite what this wants either, because that only resets stdin, not the remote side. It yanks us out of "raw" mode (raw mode being attached to the tty not the program is a historical thing that's really kind of sad now; it should be a process attribute but is an I/O device attribute, these days mostly attaching to _virtual_ I/O devices (ptys) provided by your terminal program and passed from shell to child process and back with unwanted state sticking to it like dirt). Anyway, it doesn't reset the terminal speed back to where it was before you opened it. Do we want to? (Do we care? No idea.)

Also, "terminal speed" _only_ applies to serial lines, so having the network version care about that seems useless. What else is going to use serial lines? I should implement stty and _maybe_ pppd someday if somebody asks for it? Most things probably won't.

So once again a burst of work I wind up backing out. There _is_ cleanup to do here, such as making it recognize all the supported Linux serial speeds. Under the covers it's almost always setting a divisor register for the serial clock speed, so you can set the speed to almost arbitrary values, but the kernel adamantly refuses to expose that to userspace and instead has a dozen canned values it allows, adding more periodically until it filled up its assigned bits when it hit 4 million bps. That's been stable for well over a decade now, so I can just loop over an array of the values and fill out the darn bitmask myself.
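Filling out that bitmask really is just a table walk, something like this (partial table and hypothetical helper name, as a sketch; the real thing would enumerate everything up to B4000000):

```c
#include <termios.h>

// The kernel only accepts its canned Bxxx constants, not arbitrary
// divisors, so map integer speeds by looping over a table.
// (Partial table, sketch only.)
static struct { int speed; speed_t flag; } speeds[] = {
  {50, B50}, {9600, B9600}, {19200, B19200}, {38400, B38400},
  {57600, B57600}, {115200, B115200}, {230400, B230400},
  {0, 0}
};

speed_t speed_to_flag(int speed)
{
  int i;

  for (i = 0; speeds[i].speed; i++)
    if (speeds[i].speed == speed) return speeds[i].flag;

  return (speed_t)-1;  // not a speed the kernel supports
}
```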

Oh, and half the USB serial converters out there are hardwired on the serial side and ignore the speed you set on the USB side. So testing this is going to be fun. I have USB serial on Turtle and Numato boards with me; turtle's hardwired 115200 on the console side (controlled by the kernel) and ignores the speed you set on the USB side (because it's a packet protocol, the modulating/demodulating's already been done on the other side of the converter chip and it's just bytes in USB packets on this side). Numato's the same except hardwired to 9600 (and you need a windows executable to reflash the magic chip that controls that; Numato doesn't provide a Linux binary to do that) so even though the two FPGAs are clocked at the same speed the Numato _seems_ way slower interactively.

Zero chance I can fit that level of detail into my talk, which I need to give the day after tomorrow and have maybe half an outline for. So much collating...

January 16, 2017

Greetings from Tasmania! I've been awake for 2 days!

Between the international dateline and the 14 hour flight, I missed sunday entirely. Got to the airport (most kwaj-like airport I've seen since kwaj), collected luggage, caught a bus to the event venue (Wrest Point, which is not a military academy but a casino attached to a boat dock), and attended several panels! I have not seen a single fire-breathing spider yet but they may be limited to the mainland. I had dinner with some people, whose ear I talked off because sleep deprivation and caffeine combine to put me in lecture mode. (I apologized repeatedly, but they kept listening. *shrug*)

I told them about my talk material, because they asked and that's what my brain's full of right now. Mostly I told them about stuff that probably WON'T make it into the talk. I need to edit my talk into some sort of coherent order, I have several dozen "and you should know about this!" bits that don't logically connect in any way at the moment. But nothing does right now because sleep. I'm in a dorm room. It's very sleep. (The conference-provided housing is at the University of Tasmania dorms. It's summer here. A very cold summer, but it's pretty close to antarctica here. Tasmania is Australia's Canada.)

I bought a power adapter at the airport which does not convert voltage, and I'm pretty sure plugging in my laptop won't fry it but since I didn't run the battery down on the flight I can wait until morning to find out.

January 14, 2017

Yay sleep. Feeling almost coherent again.

Got to the airport with 2 hours until my flight takes off and they wouldn't check me in for the flight without an Australian visa. (Tasmania is part of Australia. Who knew?) My response was: I need a visa? Japan didn't need a visa. Canada didn't. Russia did but that was 7 years ago and they made a _very_ big deal about it and I had to drive to another city weeks ahead of time to visit an embassy that had pamphlets walking you through the procedure for bribing police. (They randomly demand bribes and you need to recognize when they're doing it and know the correct amount, according to the official embassy pamphlets.)

Google's first page of australian visa places were all third party firms wanting a large amount of money to handle the process and offering a multi-day turnaround time, but after about half an hour the airline guys dug up the correct website to get it directly from the australian government, which cost $10 and took 5 minutes. So I made it on the plane.

Still, it would have been nice if the conference organizers had provided this information earlier. Finding out _at_ the airport is not a good way to handle that sort of thing.

January 13, 2017

Spending a day hanging out in San Francisco with Jeff on my way to Tasmania for LCA. I did less than 1/3 of the stuff my talk's covering, Jeff Dionne and Rich Felker each did as much. (I already got an email from Niishi-san with some bullet points.) So at the last minute, I'm picking Jeff's brain to write 300 lines of notes so far, out of which I need to assemble a talk. (We also did a conference call with Rich so I could get his answers to some stuff, that's part of the notes.)

Covering this material in the allotted time's going to be fun, but I've got most of a week for editing.

My sleep schedule's already crazy: my flight out of Austin took off at 5am this morning (I didn't know that was even an option) so my ride to the airport left at 3 meaning I was awake all night. I've got tonight in a Ramada Inn and then more time with Jeff tomorrow (because the person he was going to meet this evening missed their plane and won't be in until tomorrow afternoon)

And then tomorrow I spend 14 hours on an airplane to Hobart, Tasmania for LCA. Although first there's this layover at LAX, home of the famous "lax security" you've heard so much about. I have zero chance of getting anything done on the plane (United Economy Class leaving at the end of the day), and I can't even sleep on United unless I get a 3-seat row to myself, which has only happened once so far. So that's gonna be fun. Maybe I can get some programming in at the airport before the flight.

January 12, 2017

Trying to get a toybox release out before my week of international travel starts, but unfortunately the past 6 months of craziness have stuck me in one of my failure modes: 8 gazillion half-finished things, most of which are hard problems. Getting interrupted in the middle of something is bad because reverse engineering my own half-finished code is more work than writing it was in the first place. Sometimes it's so bad it's easier to throw away what I've done and start over, but after I've done that a few times on the same thing (such as the "dd" command) I forget what state the current code's in, and my ideas get tinged with "no, I already tried that", and I have to sort through what turned out to be a bad idea when I tried it and what just never got finished because I was interrupted multiple times.

The other problem is I have a half-dozen things to work on but they're all at a tricky stage where I have to make some sort of decision or work through some problem, none of which is fresh in my head. So it's _frustrating_: I do the reverse engineering work staring at my diff, figure out where I left off and why, and then it's a hard problem. And there's lots of them stacked up all throughout the tree, and especially if I don't have time to build up a good head of steam and do the darn cleanup, I'm just going to make it WORSE by meddling a bit and then leaving myself with yet more unfinished work to reverse engineer later.

When work has _nothing_ to do with toybox, it's not so bad because I can do a clean separation and say "here's a 2 hour block in the morning/evening I can poke at this". And when work lets me spend large chunks of time on toybox, I can do that too. But when work _used_ to let me spend time on toybox but now says "no no no, GPS is all that matters", and doesn't have a strong fixed schedule (telecommuting for people in enough different timezones there's never a time when _somebody_ at the company's not awake), then I feel GUILTY about spending any time on toybox when they've told me not to. Because I _should_ be doing GPS, which I am so burned out on there's no words for it.

There's a related kind of overwhelming where I should be working on 37 different things, such as the j-core website and kernel patches and userspace build system and talk prep and so on, so that no matter what I do I'm ignoring something else that I _should_ be doing... That's also a bit paralyzing. Not as bad as the "GPS is your whole world, rub your nose in the burnout!" stuff, but they stack.

I'm tired of perpetual crisis. We are now in month 7. What I should really be doing right now is packing a suitcase for my flight tonight.

January 9, 2017

Coming back to shell parsing after years away from it, and wow it has a lot of conflicting needs. The parser needs to be recursive so $() works, but it needs to be able to add multiple entries to the current command line so "$@" works. A line can have multiple commands ala "a && b; c" but a command can span multiple lines, ala:

echo ab"cd$(echo "hello
world")ef"gh

And yes the output of the $() is spliced into the same argument as the abcd/efgh. But:

$ X="A B C"; for i in $X; do echo -$i-; done

Separate arguments. The quoting is necessary to keep the argument together:

$ printf '(%s)[%s]\n' $(echo -e "one\ntwo")
(one)[two]
$ printf '(%s)[%s]\n' ab$(echo -e "one\ntwo")cd
(abone)[twocd]
$ printf '(%s)[%s]\n' "ab$(echo -e "one\ntwo")cd"
(abone
twocd)[]

The quotes inside the $() are not the same as the quotes outside the $(), meaning the quoting syntax nests arbitrarily deep. And $() subexpressions are _not_ executed immediately, if you type "echo $(echo hello >&2) \" the stderr output doesn't emit until you hit enter a second time (because of the \ continuation). So you're queueing up a sort of pipeline but instead of the pipe output going in sequence some of it turns into argument data, and yes I checked: "set -o pipefail; echo $(false); echo $?" returned zero.

There are sort of several logical scopes in quoting: you need a chunk based version that can get multiple lines (-c "$(echo -e "one\ntwo")" or a whole mmap()ed file), a logical block version that runs to the next & | && || ; command separator (newline is _sort_ of like ; but a semicolon is just a literal in quotes). As for what they MEAN:

$ echo one && echo two || echo three && echo four
one
two
four

Which implies that || anything returns an exit code of _zero_ (success!) when the previous thing was false. And of course:

$ false && echo two || echo three && echo four
three
four

So && anything returns nonzero when disabled. So in the "does not trigger" case, || anything acts like && true, and && anything becomes || false.

Oh, and command input has to be aware of what the data means to do the $PS2 prompts and for cursor up to give you the right grouping. Hmmm... Functions and for loops store up snippets they then repeatedly execute with different variables substituted in, and an if statement is a variant of that. But beyond that, just knowing when to prompt for the next line and when to run what you've already got:

$ echo one; echo \
> two

There has to be a syntax parsing pass separate from the execution pass in order to know when to prompt for more data (or error out for EOF). Which raises the question "what happens if a file ends with \" and the answer is it's ignored (or an empty line appended), and yes this includes sourcing a file. (Under bash, anyway; only really testing bash. Don't care what the Defective Annoying SHell does.) How about blocks crossing scopes?

$ cat thing

source walrus
echo there
fi
$ cat walrus
if true
  echo hello
$ ./thing 
walrus: line 4: syntax error: unexpected end of file
./thing: line 5: syntax error near unexpected token `fi'
./thing: line 5: `fi'

That seems a bit unambitious, doesn't it? Same for the way "echo -(" seems like it should work without quotes, and yet it doesn't. The ( doesn't mean anything there, but the shell freaks out anyway. I'm aware that ) is meaningful there and doesn't need to be offset by whitespace (ala "(ls -l)" works, although "(ls -l) echo blah" doesn't), but ( is only meaningful midsentence out of solidarity? Or am I missing something?

January 8, 2017

Went down the street to get a can of tea last night and I got recognized by somebody in a car stopped at the light. As in they called out my first and last name, then explained they saw a Linux talk I gave in San Francisco. (Later emailed to invite me to dinner with their co-workers on tuesday.)

Weird but cool. (I've been variants of "internet famous" ever since I wrote stock market investment columns read by ~15 million people back during the dot-com boom, and I'm used to being recognized at conventions. But this is the first time it's ever happened in my civilian identity. :)

(And of course as soon as I have some sort of plan that requires me to be in Austin, two hours later I get a text from work wondering if I'm free to go to Japan on the way to the Tasmania trip. Or possibly San Jose, they're not sure yet, but either way it would probably involve me leaving... tuesday morning. If it happens. Tickets still aren't booked. Oh well, pack a suitcase and see what happens. My fault for making plans, of course.)

Alas I can't put nearly as much work into prepping the talk as into the ELC talk, because the linuxconf one mostly isn't _my_ material. I did maybe 1/3 of it, the rest is from Jeff and Rich and Niishi-san and so on. I can talk about toolchains and root filesystems and the uclinux triage and websites and mailing lists and such, and forward porting the initial kernel port to then-current vanilla, and so on. But making the hardware faster, making the software better, adding SMP support and improving the I/O devices... I was THERE for it, but wasn't the one doing it. (Ok, I added cache support to the kernel. Lots of research for a tiny patch, but there's 5 of our 45 minutes.)

Then again my talk prep problem is usually "how will I fit what I want to say into the time allowed, I could blather for hours and hours about this without repeating myself, gotta _focus_ on just the best bits, what _are_ the best bits anyway..." This is at least a new problem, although it's not "how do I fill the time" so much as "yeah I can talk about X but Y would be so much more interesting to cover except I'm not the domain expert there". Ok, "stuff I already know" is much less interesting to me than "stuff I dunno yet", so there's some bias in there. But still...

Spending a day with Jeff to prepare the talk would be nice. Of course, skype also exists and is significantly cheaper. I wonder if Rich is recovered enough from his house fire to do skype yet? He's back online but kinda busy, and even before all this he didn't want to travel to Tasmania to talk himself. I haven't wanted to bother him until he resurfaces, but in theory I get on a plane at some point soon. It would be nice to know when.

January 7, 2017

Poking at toysh but there's a scope issue. I know that monday I stop being able to work on anything useful and have to do all GPS all the time forever, so as I'm triaging the shell todo list my brain is just bouncing anything I'd still have to be working on monday, because it'll be a week before I can look at it again and it'll go all fuzzy and I'll have to run through everything again to get the level of clarity I need to implement it.

I'm probably also kind of tired, but weekends are the only time I can do real work instead of spinning my wheels on GPS. (I am dutifully staring at that window from time to time. I don't write about it here because it's not _accomplishing_ anything. But... dutifully staring. Doing GPS won't fix the company's funding issues, and the opportunity cost of NOT doing toysh right now is enormous. But... dutifully staring.)

January 6, 2017

Trying to write a README for the j2 kernel build, which is also related to the February ELC talk, and I've got a conundrum. "make j2_defconfig" sources a 42 line arch/sh/configs/j2_defconfig file, but the resulting miniconfig file (which is everything you'd have to switch on starting from allnoconfig to recreate that config; this does _not_ include symbols set by dependencies) is 152 lines.

So what are the differences? Let's rip the defconfig symbols out of the miniconfig:

egrep -v "^($(sed 's/=.*//' $DEF | tr '\n' '|' | sed 's/|$//'))=" $MINI

And the result is 114 lines of stuff added to the defconfig: container namespaces, three different ipsec transport modes (with no help text describing them in menuconfig). SLUB_CPU_PARTIAL has help text though, and it says you want to disable it on realtime systems. How do we switch that _off_ in the defconfig?

It's got 8 gazillion NET and WLAN VENDOR symbols which are just menu guards. (Because visibility and enablement are gratuitously entangled, these symbols are there to visually group options, but then they become dependencies for those options. And yet they're still enabled if nothing depending on them is switched on, even though by themselves they have no impact on the build.) And yes, looking at this CONFIG_NET_CADENCE should clearly be CONFIG_NET_VENDOR_CADENCE but nobody's edited that config line since it went in 6 years ago (commit f75ba50bdc2b was November 2011).

CONFIG_SCHED_MC: you can enable SMP but not use a multi-core scheduler. How does that work? No idea. It also has AT keyboard and PS/2 mouse support enabled.

Hmmm... the real question is how much detail to go into for the talk. What I should do is show people how to use / search in menuconfig to find a symbol and look up its help text (why you can't look at the help text FROM the / menu, or jump straight to the symbol from there, I have no idea; yes writing my own kconfig is on my todo list but it's a big complicated thing that requires keeping the problem space in your head all at once, I.E. not easily done in 15 minute chunks so not something I can work on now).

Anyway, the point is to establish the difference between defconfig and miniconfig, and show that "simplest" can have different definitions depending on what you're optimizing for. (Miniconfig is most explicit and enables the least amount of stuff. Defconfig is more automated, but that means it does stuff behind your back and enables things you didn't ask for.)

And a brief digression into "you think there's no work to be done here?" I submitted miniconfig mode to the kernel a dozen years ago and they pulled the "do buckets of unrelated work to placate me or this won't go in" crap, which I didn't, so it didn't (I usually just resubmit a year later and go "Is whoever was gatekeeping gone yet?"); this guard symbol nonsense happened since, the magic special casing of CONFIG_EMBEDDED happened since... Oh and I should definitely mention the half-decade I spent removing perl as a build dependency, and the similar amount of time the squashfs guy spent trying to get his thing in, and maybe segue into how "Signed-off-by" is part of the layers of bureaucracy that's grown up around an aging project...

I don't worry about people stealing my ideas, it's far more work for me when they _don't_.

January 5, 2017

My talk "Building the simplest possible Linux system" got accepted to ELC (in Portland in late February). This is not work-related, and I'm paying my own way to this one.

This talk is basically on mkroot, which means I need to wean it off of busybox between now and then. Because "simplest possible" isn't going to have two userspace packages, and if I _can't_ wean it off, I should show them busybox instead of toybox, which would be sad.

The simplest self-hosting system is (conceptually) 4 packages: toolchain, kernel, libc, cmdline. For a leaf system drop the toolchain, static link and handwave the libc as part of the toolchain (why Ulrich DrPepper was wrong about static linking), and replace cmdline with "app" which can be hello world. To run the result, you need a runtime (board or emulator), bootloader (qemu -kernel, system in ROM), and root filesystem (rant about 4 types of filesystems: block backed, pipe backed, ram backed, synthetic).

Hmmm... I need to explain defconfig files and miniconfig ("here is a 20 line kernel config file where every line means something"). Architecture selection and cross compilers. Booting vmlinux under qemu, bootloaders, and different binary types. Root filesystems (initramfs/initmpfs vs initrd vs root=). Demonstrate a kernel launching "hello world" and talk about why PID 1 is unique, walkthrough oneit.c and switch_root vs pivot_root. Directory layout (why LFS died, why posix is useless, and why /usr happened...). Walkthrough of a simple init script (and I should resubmit my CONFIG_DEVTMPFS_MOUNT patch that makes it work for initramfs). Probably a little on nommu systems (fdpic vs binflt, logically goes in the part where I describe _what_ the ~20 config entries in the kernel miniconfig do, and what differing target configs look like a la the aboriginal linux LINUX_CONFIG target snippets...). And of course device tree vs non-device tree, dtb files and the horrible bespoke syntax du jour format that grew rather than was designed for device trees (and how its documentation is chopped into hundreds of little incoherent files in the kernel source, using a license that ensures BSD and such will never use it, which is why windows extended ACPI to Arm...).
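For reference, a miniconfig for a serial-console qemu target looks something like this (the symbol list below is a sketch from memory, not mkroot's actual file, but the point stands: every line is load-bearing):

```
# 64-bit processor, with the "make it smaller" knobs unlocked
CONFIG_64BIT=y
CONFIG_EXPERT=y
CONFIG_EMBEDDED=y
# Run ELF binaries and #! scripts
CONFIG_BINFMT_ELF=y
CONFIG_BINFMT_SCRIPT=y
# Boot from a gzipped initramfs
CONFIG_BLK_DEV_INITRD=y
CONFIG_RD_GZIP=y
# A console to talk to
CONFIG_TTY=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
# /proc, /sys, and a populated /dev
CONFIG_PROC_FS=y
CONFIG_SYSFS=y
CONFIG_DEVTMPFS=y
```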

Oh, kernel command line options (supply them, find them in the docs, find them in the source), how unrecognized name=value arguments become environment variables (which are not stack/heap/data/bss/mmap, see /proc/self/environ... Really "how Linux launches a process". Which means do a quick ELF walkthrough: text, data, bss, heap, stack, that stupid TLS crap (basically a constant stack frame with its own register). Also #! scripts. Dynamic vs static linking: fun with ldd and readelf, the old chroot-setup script...)
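The environment-isn't-on-the-stack point is easy to demo from a shell prompt: /proc/self/environ is the NUL-separated block the kernel handed the process at exec time, which is exactly where an unrecognized name=value kernel argument lands for PID 1 (Linux-only, and the CANARY variable is just a stand-in for a kernel command line argument):

```shell
#!/bin/sh
# Inspect a process's inherited environment the same way you'd look at
# PID 1's after booting with a name=value kernel argument. The environ
# block is NUL separated, so turn the NULs into newlines to read it.
env CANARY=hello sh -c 'tr "\0" "\n" < /proc/self/environ' | grep '^CANARY='
```

That prints `CANARY=hello`: the variable made it into the child's environ block without ever touching stack, heap, data, or bss.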

Really if we're doing "simplest possible" I should demo hello world on bare metal and gluing your app to u-boot. Because as soon as you add "Linux" your overhead goes up by a megabyte. (Anecdote about NORAD's Cheyenne Mountain display running busybox because they had to audit every line of code they ran, and it was much easier than auditing gnu/crap.)

But first, I need to get the toybox shell basically usable. I have a month and a half. Hmmm... (And the last file in "make install_airlock" that's not on my host system is ftpd, because there's no standard host reference version that can agree on a command line syntax.)

January 4, 2017

At Texas Linux Fest last year I signed up for the Austin Checktech mailing list, and they've emailed out volunteer opportunities every month or so since then. I have not been in town for a single one of them yet, because of the travel required by my "we can only afford to pay you half time for 6 months now" $DAYJOB. The upcoming one is a design workshop on January 16, during the Tasmania trip. (And I still don't know if I'll be back to drive Fuzzy to her fencing tournament on the 27th.) I love the people, I love the technology, but this is getting old.

I got the first pass of ftpget checked in last night. (Yes there was one in pending, but I was 2/3 of the way through a new one before I noticed.)

Running my own code under strace, glibc's getaddrinfo() call is doing insane amounts of work. It's opening a NETLINK socket, doing sendto() and a bunch of recvmsg() and then opening a second socket to connect to "/var/run/nscd/socket". I'm testing against "", there is no excuse for this. I tried adding a loop to my xconnect() function to do an AI_NUMERICHOST lookup first, and only try a non-numeric lookup if that failed (and if the user requested it), but it's still doing all that crap for a numeric lookup. (You can tell from the string if it's a numeric address.)
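The "you can tell from the string" check is trivial. Here's a sketch of a purely textual dotted-quad test (a hypothetical helper, not toybox's actual xconnect() logic), the kind of gate that should let a resolver skip nscd and netlink entirely for numeric input:

```shell
#!/bin/sh
# Hypothetical helper: return success iff $1 is a numeric IPv4 address,
# i.e. exactly four dot-separated decimal fields, each 0-255, with no
# other characters. A string that passes never needs a DNS lookup.
is_numeric_ipv4()
{
  case "$1" in
    *[!0-9.]*|.*|*.|*..*) return 1 ;;
  esac
  # Split on dots into positional parameters.
  set -- $(IFS=.; echo $1)
  [ $# -eq 4 ] || return 1
  for i in "$@"; do [ "$i" -le 255 ] || return 1; done
}

is_numeric_ipv4 127.0.0.1 && echo "numeric: connect directly"
is_numeric_ipv4 example.com || echo "name: needs a real lookup"
```

(IPv6 needs its own check, but the principle is the same: the string itself tells you whether any lookup machinery should run.)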

You wonder why there's 8 gazillion weird security holes every year? Oh well, hopefully linking against musl and bionic isn't this broken.

Sigh. I got ftpget finished and checked in, but haven't done the test stuff for it yet. Instead I did a "git diff" on my tree to see what kind of other fires I left burning, and here's a typical example:

$ git diff toys/*/dmesg.c | diffstat
 dmesg.c |   73 +++++++++++++++++++++++--------------------
 1 file changed, 38 insertions(+), 35 deletions(-)

I.e. "I was working on a thing and got interrupted. Again." This was something I did in San Diego last month, in response to Elliott sending me basically a complete rewrite of dmesg for a new API that Kay Sievers crapped all over the kernel. It is a very, very bad API.

But I can't work on it now. I need to go do GPS stuff for $DAYJOB instead. Endless, forever GPS stuff. (Because rtklib is only half an implementation, Andrew Hutton's code is GPLv3 so we can't use any of it, and basically everything else parses the output of an integrated on-chip solution that doesn't give us the data we need.)

January 3, 2017

The on-again off-again Tasmania trip looks on again? They're trying to involve another trip to Japan, which I'm ambivalent about. As much as I adore hanging out there I can't make plans (Fuzzy wants me to drive her to a fencing tournament 15 minutes from home on the 27th: will I be able to? I have NO IDEA...) and I'm starting to develop a precursor to varicose veins from all the long plane flights, and my thighs object to adding an extra 12 hour leg between Japan and Tasmania. (United and Greyhound have equally uncomfortable seats.)

Still, yay talk. I've never spoken at this conference before, it's an honor to be accepted, and I'd really like to be able to do this.

Yesterday turned out to be a day off, which I found out when nobody but Rich was on the 5pm call. I texted Jen and found out it was a holiday since New Year's was on Sunday... at the end of the work day. So I'm taking today as that day off since I did GPS stuff Monday. (Not in a hugely effective manner but it still counts.)

Trying to finish up ftpget, which is a thing that busybox apparently dreamed up, or at least there's no standard for it I've been able to find. I've been using it forever because it's a really easy way to script sending a file from point A to point B, and it's a thing I need for toybox make airlock/mkroot to do its thing with the full QEMU boot and control images and all that: the QEMU image dials out to an FTP server on the host's loopback and does an "ftpput" of its output files. This is the simplest way I've found of sending a file out through the virtual network, or from a chroot to a host system. Yes there are like 12 other ways, and I'd happily have a virtual filesystem be the way if qemu had a built-in smbfs server. Alas it has virtfs, which combines virtio with 9p (both turn out to be full of rough edges), using it requires QEMU be built against strange libraries on the host (it's just extended attributes, why would you need libcap-devel and libattr-devel?), and then the setup is crotchety and I should probably revisit it someday, but my own todo item there is doing a simple smb server in toybox. So for the moment: implement ftpget.

The ftp protocol is only moderately insane, as in you can carve a reasonable subset out of it and ignore the rest... until you try to make cryptography work. Alas, even in "passive" mode you still have to open a second connection (http's ability to send the data inline in the same connection was apparently a revolutionary advance), which is why masquerading routers have to parse ftp connections, and don't ask me how sftp is supposed to work. But for sending files one hop on a local network, it still works.
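That second connection is why everybody has to parse the server's 227 reply: it encodes the data-connection address as six decimal bytes, with the port split as p1*256+p2. A sketch of the decoding (hypothetical helper, not toybox code; RFC 959 doesn't even pin down the reply format, so real clients just scan for the six numbers):

```shell
#!/bin/sh
# Decode an FTP "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" reply
# into host:port. This is the parsing every client (and every
# masquerading router snooping the control channel) has to do before
# opening the data connection.
pasv_to_hostport()
{
  # Pull the six comma-separated numbers out of the parentheses.
  set -- $(echo "$1" | sed -n 's/.*(\([0-9,]*\)).*/\1/p' | tr , ' ')
  [ $# -eq 6 ] || return 1
  echo "$1.$2.$3.$4:$(($5 * 256 + $6))"
}

pasv_to_hostport "227 Entering Passive Mode (127,0,0,1,195,149)"
# -> 127.0.0.1:50069
```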

(An aside: I should clean up and check in my netcat mode that prints the outgoing data in one color and the incoming data in another color, either to stdout or to a file. It's very useful logging. Doesn't quite do it for FTP because there's that whole second channel data goes along, but it lets you see what the control logic actually looks like going across the wire. But THAT's tied up with the whole "stop gcc's liveness analysis from breaking netcat on nommu in a semi-portable way" changes, which boils down to either "move everything to a function" or "move everything to globals", or possibly I can just hide it in XVFORK() which would be nice...)

The main problem with ftpget/ftpput is there are several other things you need to be able to do, which ftpget has no provision for, and I'm not adding: ftpls, ftpls-l, ftprm, ftpmv, ftpmkdir, and ftprmdir. Instead, I want to add flags to ftpget, which implies that ftpput is also an ftpget flag (with the command name being a historical alias with a different default action).
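The "ftpput is an ftpget flag with a different default" idea is just dispatch on argv[0], busybox-style. A sketch (the -g/-p option letters are hypothetical, not toybox's actual ones):

```shell
#!/bin/sh
# One implementation serving two command names: the name you invoke it
# as picks the default action, and an explicit flag overrides the name.
ftp_action()
{
  name=$1; shift; OPTIND=1
  action=get
  [ "$name" = ftpput ] && action=put
  while getopts gp opt; do
    case $opt in
      g) action=get ;;
      p) action=put ;;
    esac
  done
  echo "$action"
}

ftp_action ftpget      # default: download
ftp_action ftpput      # same code, different default: upload
ftp_action ftpget -p   # flag overrides the name
```

In the real command the dispatch would key off ${0##*/} via a symlink, the same way busybox multiplexes its applets.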

This gives me rather a lot to test, which raises the "how do I script these tests" issue. This test is tricky because I need netcat and ftpd to test ftpget. I need netcat because I've been testing against busybox ftpd (it's there, and no two ftpd command lines seem quite the same) and that only works on stdin/stdout.

This means "make test_ftpget" requires ftpd and netcat to be in the host $PATH, and the right _versions_ of each. (The toybox versions, not whatever weird host versions might have their own command line syntax. Busybox has two different ones for netcat. Back before I relaunched toybox and was trying to get back into busybox development, this is one of the things that drove me away again.)

Sigh. I should start a todo list just for items that would be a full-time job for somebody. A real Linux standard that documents what things do (a role Posix abdicated in favor of continuing to host Jorg Schilling, and the Linux Foundation abdicated in favor of accepting huge cash donations from Red Hat and Microsoft). A j-core+xv6+musl+toybox course teaching people C programming, VHDL, and both processor and operating system design. Where "course" is probably a 2-4 year program. A "hello world" kernel like I complained about years ago...

Anyway, back to ftpget...

January 2, 2017

In lieu of getting to spend the week between Christmas and New Year's catching up on toybox and mkroot, I got a 3 day weekend.

(Ok, I explained how my normal working style involves round-robining between different tasks so that when I'm blocked on one I can make progress somewhere else, and that I can focus past this to hit deadlines because the deadline is its own safety valve letting me know when I can _stop_ caring about this topic, but after the deadline I usually need extra recovery time and in the absence of deadlines a round-the-clock push that never ends is called a "death march". And that after three weeks focusing round the clock on GPS in Tokyo including evenings and weekends, two more weeks in San Francisco, and having GPS looming over my head as "the priority" the rest of the time while I was putting out other fires, being told the Tuesday between Christmas and New Year's "You don't get vacation and you can't cycle to lower priority things for recovery time, you will work on GPS and nothing else until further notice"... Yeah, that's building up to the kind of "I can't stand to look at this code, it viscerally disgusts me" that made me leave BusyBox. Yes it's a failing, but after a couple decades of doing this I know my limits. I haven't actually had a _vacation_ since I joined this company over 2 years ago, and since November have been in Tokyo, Austin, Minneapolis, San Francisco, and San Diego, with a trip to Tasmania pending (but not yet funded).)

Still: I managed to beg Friday off and took the weekend, and got a _little_ caught up on toybox and mkroot.

My mkroot project is happening because Aboriginal Linux died. I introduced the idea of ending the project here, and announced the mkroot stuff here, but finding time to work on it's been hard.

The problem is the toolchain packages have been frozen on the last GPLv2 releases for years because I'll never voluntarily ship GPLv3 binaries in a hobbyist context, and nobody regression tests against those versions anymore. In the course of 2 kernel releases last year they broke the build on 4 different architectures (for 4 different reasons). It wasn't anything I couldn't fix, but it was more than I could keep up with in the time I had available, and I fell far enough behind catching up was more work than it was worth. I'd been meaning to switch to a new (llvm/clang) toolchain for ages, other people did their own toolchains, and when Rich did his own musl toolchain builder, I went "sure, let's use that one".

But taking the toolchain build out of the project meant there wasn't enough left to justify the rest of the infrastructure (such as the download and package cache stuff extracting and patching tarballs), and I did a really simple root filesystem build that fits in one file (building a usable toybox-based root filesystem, using musl-cross-make, in a single 300 line shell script), and went "huh, how much of the rest is actually needed"?

I ported the stuff to the toybox build as a new "make install_airlock" target; the environment variable sanitizing is more or less a single "env -i" call with a half-dozen variable whitelist...

I'm still not quite sure where to host it: at first I had it attached to the j-core account (back when $DAYJOB let me spend time on it and it was of use to them), then I put it on my github, but really the idea is to build a simple initramfs and I should probably just add a "make install_root" target to toybox. But to do that, I need to clean the busybox dependency out of it, which comes in two parts:

The script is downloading and compiling busybox, with a config file that results in:

bunzip2 bzcat bzip2 gunzip gzip hush ping route sh tar unxz vi wget xzcat zcat

Toybox's new "install_airlock" has a $PENDING command list it symlinks out of the host $PATH (grep PENDING= scripts/), currently containing:

bunzip2 bzcat dd diff expr ftpd ftpget ftpput gunzip less ping route tar test tr vi wget zcat awk bzip2 fdisk gzip sh sha512sum unxz xzcat

The mkroot list is a subset of the airlock list, but eliminating the mkroot list lets me merge into toybox. That said, the airlock list is what's necessary for a "hermetic build", which is of interest to the Android guys. (Ok, it's just the tip of the iceberg for what they need, but still.)

Of the 15 commands in the mkroot list, I have good implementations of bunzip2, bzcat, gunzip, and zcat already. (They're needed as busybox builtins due to the way busybox tar works.) I can do bzip2 and gzip reasonably easily (I did most of bzip2 a decade ago, the problem is the string sorting plumbing is just sort of a heuristic I never understood well enough to write my own; gotta dig into the math of the various sorting approaches and understand why the fallbacks trigger that way) but am not sure bzip2 compression side is actually necessary? (It's obsoletish. There's no xz compression side logic either but I'd probably just want to do lzma without the instruction-specific compressors there, if that's an option). The ping, route, and tar commands are cleanups/rewrites of stuff in pending, as is xzcat/unxz (but that's a way bigger cleanup, that _does_ need the architecture-specific instruction compressors to deal with existing archives). The reason I haven't finished gzip yet is it's not clear where the dictionary resets should happen (nobody quite agrees, "every 250k of input" is probably reasonable; using SMP for compression/decompression is related to this). I've wrestled with wget a bit already and will probably just end up rewriting it.
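The dictionary-reset and SMP questions are related because gzip's format allows concatenating independent members: compress fixed-size chunks separately (in parallel, pigz-style) and the result is still one valid .gz stream, with the dictionary implicitly reset at each chunk boundary. A demonstration, assuming host gzip/split (chunk size picked to match the 250k-ish guess above):

```shell
#!/bin/sh
# Compress a file as independent fixed-size chunks, concatenate the
# members, and verify the result still round-trips as a single stream.
set -e
dd if=/dev/urandom of=input bs=1024 count=600 2>/dev/null
split -b 256000 input chunk.
for i in chunk.*; do gzip "$i" & done   # one job per chunk: free SMP
wait
cat chunk.*.gz > output.gz              # members concatenate legally
gunzip -c output.gz | cmp - input && echo "round trip ok"
```

The cost is slightly worse compression near each boundary (the dictionary starts cold), which is exactly the trade a periodic reset makes anyway.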

The really hard commands are hush/sh and vi. I don't strictly need vi to build, although that one's not hard (just elaborate). But you can't build without a shell. And _what_ you build with the shell is... squishy. Unclearly defined. I need lots of scripts to run through the shell to see where the behavior diverges and fix it, but I haven't built Linux From Scratch under hush either, so... (Sigh. Probably set up a chroot with bash, automate the current LFS build under that, and use that to make toysh tests).

So I need to tackle toysh in order to merge mkroot into toybox "make install_root", which means it has to live on its own for a while. :P

But in the meantime, I think I've gotten most of the aboriginal linux plumbing to the point I can ignore it. There are some nice bits I'm not reimplementing (mainly the package cache stuff), but not stuff I actually _need_ and it was always annoying to try to explain it anyway.

And after all that: what I actually spent the weekend banging on in toybox was mostly ftpget. (I noticed there's one in pending after I was halfway done writing a replacement. Sigh. It's one of the three things "make install_airlock" complains the host hasn't got, because my host is ubuntu not busybox.)

January 1, 2017

I aten't dead.

Last January 1, I mentioned that "update the darn blog" was one of my patreon goals, and in December it got hit! (Woo! Ok, I bumped the goal amount down a while back, but it got hit!)

I posted a patreon update explaining how my blog got constipated this time. I suppose I should explain it here as well (although that one has links).

I'm still riding a startup down (something I swore I'd never do again but I love the technology and the people). Since they haven't been able to pay me a full-time salary since June, I eventually took a side gig doing a space thing to refill the bank account, then found out why not only will China and India get to Mars before the US will but Vatican City probably will too: ITAR export regulations! Yes, the same insanity that (back in the 90's) meant openpgp/openssh/openssl were developed in Canada and Germany and could only be downloaded from non-US websites, but you weren't allowed to upload them _back_ out of the country because the US government said that was exporting munitions. This caused US cryptographers to move overseas and give up their US citizenship because otherwise they couldn't work in their chosen field.

It turns out this insanity was extended to the US space program in 1996 when we sold some crypto hardware to China on the condition that they couldn't examine it, just shoot it into space. Armed guards followed it to China, where they launched it on a rocket that exploded, and the hardware was never recovered. The resulting scandal extended ITAR to the whole space program. As my boss explained to me (the Friday of my first week there), "If I buy a screwdriver at home depot, it's just a screwdriver, but once I use it to turn a screw on a spacecraft it's now a munition and cannot be discussed with non-US persons".

As far as I can tell, this is why the US no longer has a space program to speak of (Commander Hadfield is Canadian), and why people like me don't want to get any of it on them. (It was _really_ fun for me still doing evening and weekend work on projects for a Canadian company with most of its engineers in Japan, and maintaining Android's command line utilities as my main hobby project. Yeah, not a comfortable position. I know where the "proprietary vs non-proprietary" lines are, but this ITAR crap? That's "covered with spiders" level of get it off me.)

This is why I stopped blogging, unsure what exactly I could say until I'd disentangled myself from that job (which took a while), and then I was out of the habit and way behind... (This method of blogging still has the problem that I can't post things out of order. I can _write_ them out of order, but the RSS feed generation plumbing is really simple and I have a personal rule of not editing old entries, even though it's just a text file I compose in vi.)

So, new year, new file. I'm still riding the startup down. Originally this was supposed to be until our next round of funding in October, but that came and went and I'm still paid half-time (but expected to work _more_ than full-time) without even a new deadline where the "funding knothole" might resolve. Lots of travel (which they pay for, but don't reimburse my receipts anymore). Wheee.

One of the big reasons I enjoyed this job so much is they used my open source projects, but recently they've switched to "you need to do closed source GPS correlator software to drive our patented hardware, to the exclusion of all else", and for several months I haven't had time/energy to advance toybox much. They even yanked christmas break out from under me. So, not sure how much longer that's going to last...

(The _most_ awkward part is I proposed a talk at LCA on their technology, but no tickets have been booked, we haven't had time to prepare talk material, I can't do the talk by myself and I'm not paying my own way to Tasmania to do it. It's an honor to be accepted and if I was going to cancel I should have given them a full month's notice, but I still don't know if this talk is actually going to happen. Or if it's going there and back or bouncing off there to go to Tokyo for a third multi-week intensive focus on the proprietary GPS stuff that I'm pretty burnt out on these days.)

Back to 2016