Rob's Blog

January 9, 2017

Coming back to shell parsing after years away from it, and wow it has a lot of conflicting needs. The parser needs to be recursive so $() works, but it needs to be able to add multiple entries to the current command line so "$@" works. A line can have multiple commands ala "a && b; c" but a command can span multiple lines, ala:

echo ab"cd$(echo "hello
world")ef"gh

And yes the output of the $() is spliced into the same argument as the abcd/efgh. But:

$ X="A B C"; for i in $X; do echo -$i-; done

Separate arguments. The quoting is necessary to keep the argument together:

$ printf '(%s)[%s]\n' $(echo -e "one\ntwo")
$ printf '(%s)[%s]\n' ab$(echo -e "one\ntwo")cd
$ printf '(%s)[%s]\n' "ab$(echo -e "one\ntwo")cd"
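For reference, those three differ in where the splitting happens; a sketch of the same thing using printf to generate the two-line string (so it doesn't depend on bash's echo -e):

```shell
# Unquoted $() output gets word-split on IFS (including newlines), and
# adjacent literal text glues onto the first/last resulting word. Only
# full quoting keeps the substitution's newline inside one argument.
two=$(printf 'one\ntwo')

a=$(printf '(%s)[%s]\n' $two)           # two args: "one" and "two"
b=$(printf '(%s)[%s]\n' ab${two}cd)     # two args: "abone" and "twocd"
c=$(printf '(%s)[%s]\n' "ab${two}cd")   # one arg with an embedded newline
```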

The quotes inside the $() are not the same as the quotes outside the $(), meaning the quoting syntax nests arbitrarily deep. And $() subexpressions are _not_ executed immediately, if you type "echo $(echo hello >&2) \" the stderr output doesn't emit until you hit enter a second time (because of the \ continuation). So you're queueing up a sort of pipeline but instead of the pipe output going in sequence some of it turns into argument data, and yes I checked: "set -o pipefail; echo $(false); echo $?" returned zero.
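That pipefail result boils down to a two-line sketch: a substitution used as an argument has its exit status thrown away, while a bare assignment (which has no outer command) keeps it:

```shell
# The outer command's exit status wins when $() supplies an argument:
echo "$(false)" >/dev/null
arg_status=$?      # 0: echo succeeded, false's failure is discarded

# With no outer command, the substitution's status becomes the status:
x=$(false)
assign_status=$?   # 1
```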

There are sort of several logical scopes in quoting: you need a chunk-based version that can get multiple lines (-c "$(echo -e "one\ntwo")" or a whole mmap()ed file), and a logical-block version that runs to the next & | && || ; command separator (newline is _sort_ of like ; but a semicolon is just a literal in quotes). As for what they MEAN:

$ echo one && echo two || echo three && echo four 

Which implies that || anything returns an exit code of _zero_ (success!) when the previous thing was false. And of course:

$ false && echo two || echo three && echo four

So && anything returns nonzero when disabled. So in the "does not trigger" case, || anything acts like && true, and && anything becomes || false.
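Said differently: a skipped && or || leaves $? untouched, which is all you need to trace through the chains above:

```shell
false && echo two        # && doesn't fire, and $? stays 1 (from false)
s1=$?
true || echo three       # || doesn't fire, and $? stays 0 (from true)
s2=$?
# So in "false && echo two || echo three && echo four", the 1 from false
# skips "two", || runs "three" (status 0), and && runs "four":
out=$(false && echo two || echo three && echo four)
```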

Oh, and command input has to be aware of what the data means to do the $PS2 prompts and for cursor up to give you the right grouping. Hmmm... Functions and for loops store up snippets they then repeatedly execute with different variables substituted in, and an if statement is a variant of that. But beyond that, just knowing when to prompt for the next line and when to run what you've already got:

$ echo one; echo \
> two

There has to be a syntax parsing pass separate from the execution pass in order to know when to prompt for more data (or error out for EOF). Which raises the question "what happens if a file ends with \" and the answer is it's ignored (or an empty line appended), and yes this includes sourcing a file. (Under bash, anyway; I'm only really testing bash. Don't care what the Defective Annoying SHell does.) How about blocks crossing scopes?

$ cat thing
#!/bin/bash

source walrus
echo there
fi
$ cat walrus
if true
then
  echo hello
$ ./thing 
walrus: line 4: syntax error: unexpected end of file
./thing: line 5: syntax error near unexpected token `fi'
./thing: line 5: `fi'

That seems a bit unambitious, doesn't it? Same for the way "echo -(" seems like it should work without quotes, and yet it doesn't. The ( doesn't mean anything there, but the shell freaks out anyway. I'm aware that ) is meaningful there and doesn't need to be offset by whitespace (ala "(ls -l)" works, although "(ls -l) echo blah" doesn't), but ( is only meaningful midsentence out of solidarity? Or am I missing something?
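As a footnote, the file-ends-with-\ behavior mentioned above is easy to check mechanically (assuming bash is installed, since bash is what's being tested against):

```shell
# A script whose last line ends with \ and then hits EOF: bash ignores the
# dangling continuation instead of reporting a syntax error.
out=$(printf 'echo hello \\' | bash)
rc=$?
```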

January 8, 2017

Went down the street to get a can of tea last night and I got recognized by somebody in a car stopped at the light. As in they called out my first and last name, then explained they saw a Linux talk I gave in San Francisco. (Later emailed to invite me to dinner with their co-workers on tuesday.)

Weird but cool. (I've been variants of "internet famous" ever since I wrote stock market investment columns read by ~15 million people back during the dot-com boom, and I'm used to being recognized at conventions. But this is the first time it's ever happened in my civilian identity. :)

(And of course as soon as I have some sort of plan that requires me to be in Austin, two hours later I get a text from work wondering if I'm free to go to Japan on the way to the Tasmania trip. Or possibly San Jose, they're not sure yet, but either way it would probably involve me leaving... tuesday morning. If it happens. Tickets still aren't booked. Oh well, pack a suitcase and see what happens. My fault for making plans, of course.)

Alas I can't put nearly as much work into prepping the talk as into the ELC talk, because the linuxconf one mostly isn't _my_ material. I did maybe 1/3 of it, the rest is from Jeff and Rich and Niishi-san and so on. I can talk about toolchains and root filesystems and the uclinux triage and websites and mailing lists and such, and forward porting the initial kernel port to then-current vanilla, and so on. But making the hardware faster, making the software better, adding SMP support and improving the I/O devices... I was THERE for it, but wasn't the one doing it. (Ok, I added cache support to the kernel. Lots of research for a tiny patch, but there's 5 of our 45 minutes.)

Then again my talk prep problem is usually "how will I fit what I want to say into the time allowed, I could blather for hours and hours about this without repeating myself, gotta _focus_ on just the best bits, what _are_ the best bits anyway..." This is at least a new problem, although it's not "how do I fill the time" so much as "yeah I can talk about X but Y would be so much more interesting to cover except I'm not the domain expert there". Ok, "stuff I already know" is much less interesting to me than "stuff I dunno yet", so there's some bias in there. But still...

Spending a day with Jeff to prepare the talk would be nice. Of course, skype also exists and is significantly cheaper. I wonder if Rich is recovered enough from his house fire to do skype yet? He's back online but kinda busy, and even before all this he didn't want to travel to Tasmania to talk himself. I haven't wanted to bother him until he resurfaces, but in theory I get on a plane at some point soon. It would be nice to know when.

January 7, 2017

Poking at toysh but there's a scope issue. I know that monday I stop being able to work on anything useful and have to do all GPS all the time forever, so as I'm triaging the shell todo list my brain is just bouncing anything I'd still have to be working on monday, because it'll be a week before I can look at it again and it'll go all fuzzy and I'll have to run through everything again to get the level of clarity I need to implement it.

I'm probably also kind of tired, but weekends are the only time I can do real work instead of spinning my wheels on GPS. (I am dutifully staring at that window from time to time. I don't write about it here because it's not _accomplishing_ anything. But... dutifully staring. Doing GPS won't fix the company's funding issues, and the opportunity cost of NOT doing toysh right now is enormous. But... dutifully staring.)

January 6, 2017

Trying to write a README for the j2 kernel build, which is also related to the February ELC talk, and I've got a conundrum. "make j2_defconfig" sources a 42 line arch/sh/configs/j2_defconfig file, but the resulting miniconfig file (which is everything you'd have to switch on starting from allnoconfig to recreate that config; this does _not_ include symbols set by dependencies) is 152 lines.

So what are the differences? Let's rip the defconfig symbols out of the miniconfig:

egrep -v "^($(sed 's/=.*//' $DEF | tr '\n' '|' | sed 's/|$//'))=" $MINI

And the result is 114 lines of stuff added to the defconfig: container namespaces, three different ipsec transport modes (with no help text describing them in menuconfig). SLUB_CPU_PARTIAL has help text though, and it says you want to disable it on realtime systems. How do we switch that _off_ in the defconfig?
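A toy version of that filter, with two-line stand-in files (the real defconfig and miniconfig are 42 and 152 lines), showing how the sed/tr pipeline builds an egrep alternation out of the defconfig's symbol names:

```shell
# Strip every symbol the defconfig sets out of the miniconfig, leaving only
# what defconfig enabled behind your back.
DEF=$(mktemp); MINI=$(mktemp)
printf 'CONFIG_CPU_SUBTYPE_J2=y\nCONFIG_SMP=y\n' > "$DEF"
printf 'CONFIG_CPU_SUBTYPE_J2=y\nCONFIG_SMP=y\nCONFIG_SLUB_CPU_PARTIAL=y\n' > "$MINI"
# sed strips the =values, tr joins the names with |, the second sed drops
# the trailing |, and egrep -v removes those symbols from the miniconfig:
extra=$(egrep -v "^($(sed 's/=.*//' "$DEF" | tr '\n' '|' | sed 's/|$//'))=" "$MINI")
rm -f "$DEF" "$MINI"
```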

It's got 8 gazillion NET and WLAN VENDOR symbols which are just menu guards. (Because visibility and enablement are gratuitously entangled, these symbols are there to visually group options, but then they become dependencies for those options. And yet they're still enabled if nothing depending on them is switched on, even though by themselves they have no impact on the build.) And yes, looking at this CONFIG_NET_CADENCE should clearly be CONFIG_NET_VENDOR_CADENCE but nobody's edited that config line since it went in 6 years ago (commit f75ba50bdc2b was November 2011).

CONFIG_SCHED_MC: you can enable SMP but not use a multi-core scheduler. How does that work? No idea. It also has AT keyboard and PS/2 mouse support enabled.

Hmmm... the real question is how much detail to go into for the talk. What I should do is show people how to use / search in menuconfig to find a symbol and look up its help text (why you can't look at the help text FROM the / menu, or jump straight to the symbol from there, I have no idea; yes writing my own kconfig is on my todo list but it's a big complicated thing that requires keeping the problem space in your head all at once, I.E. not easily done in 15 minute chunks so not something I can work on now).

Anyway, point is to establish the difference between defconfig and miniconfig, and show that "simplest" can have different definitions depending on what you're optimizing for. (Miniconfig is most explicit and enables the least amount of stuff. Defconfig is more automated, but that means it does stuff behind your back and enables things you didn't ask for.)

And a brief digression into "you think there's no work to be done here? I submitted miniconfig mode to the kernel a dozen years ago and they pulled the 'do buckets of unrelated work to placate me or this won't go in' crap, which I didn't, so it didn't (I usually just resubmit a year later and go 'Is whoever was gatekeeping gone yet?'); this guard symbol nonsense happened since, the magic special casing of CONFIG_EMBEDDED happened since..." Oh and I should definitely mention the half-decade I spent removing perl as a build dependency and the similar amount of time the squashfs guy spent trying to get his thing in, and maybe segue into how "Signed-off-by" is part of the layers of bureaucracy that's grown up around an aging project...

I don't worry about people stealing my ideas, it's far more work for me when they _don't_.

January 5, 2017

My talk "Building the simplest possible Linux system" got accepted to ELC (in Portland in late February). This is not work-related, and I'm paying my own way to this one.

This talk is basically on mkroot, which means I need to wean it off of busybox between now and then. Because "simplest possible" isn't going to have two userspace packages, and if I _can't_ wean it off, I should show them busybox instead of toybox, which would be sad.

The simplest self-hosting system is (conceptually) 4 packages: toolchain, kernel, libc, cmdline. For a leaf system drop the toolchain, static link and handwave the libc as part of the toolchain (why Ulrich DrPepper was wrong about static linking), and replace cmdline with "app" which can be hello world. To run the result, you need a runtime (board or emulator), bootloader (qemu -kernel, system in ROM), and root filesystem (rant about 4 types of filesystems: block backed, pipe backed, ram backed, synthetic).

Hmmm... I need to explain defconfig files and miniconfig ("here is a 20 line kernel config file where every line means something"). Architecture selection and cross compilers. Booting vmlinux under qemu, bootloaders, and different binary types. Root filesystems (initramfs/initmpfs vs initrd vs root=). Demonstrate a kernel launching "hello world" and talk about why PID 1 is unique, walkthrough oneit.c and switch_root vs pivot_root. Directory layout (why LFS died, why posix is useless, and why /usr happened...). Walkthrough of a simple init script (and I should resubmit my CONFIG_DEVTMPFS_MOUNT patch that makes it work for initramfs). Probably a little on nommu systems (fdpic vs binflt, logically goes in the part where I describe _what_ the ~20 config entries in the kernel miniconfig do, and what differing target configs look like ala the aboriginal linux LINUX_CONFIG target snippets... And of course device tree vs non-device tree, dtb files and the horrible bespoke syntax du jour format that grew rather than was designed for device trees (and how its documentation is chopped into hundreds of little incoherent files in the kernel source, using a license that ensures BSD and such will never use it, which is why Windows extended ACPI to Arm...)

Oh, kernel command line options (supply them, find them in the docs, find them in the source), how unrecognized name=value arguments become environment variables (which are not stack/heap/data/bss/mmap, see /proc/self/environ... Really "how Linux launches a process". Which means do a quick ELF walkthrough: text, data, bss, heap, stack, that stupid TLS crap (basically a constant stack frame with its own register). Also #! scripts. Dynamic vs static linking: fun with ldd and readelf, the old chroot-setup script...)

Really if we're doing "simplest possible" I should demo hello world on bare metal and gluing your app to u-boot. Because as soon as you add "Linux" your overhead goes up by a megabyte. (Anecdote about NORAD's Cheyenne Mountain display running busybox because they had to audit every line of code they ran, and it was much easier than auditing gnu/crap.)

But first, I need to get the toybox shell basically usable. I have a month and a half. Hmmm... (And the last file in "make install_airlock" that's not on my host system is ftpd, because there's no standard host reference version that can agree on a command line syntax.)

January 4, 2017

At Texas Linux Fest last year I signed up for the Austin Checktech mailing list, and they've emailed out volunteer opportunities every month or so since then. I have not been in town for a single one of them yet, because of the travel required by my "we can only afford to pay you half time for 6 months now" $DAYJOB. The upcoming one is a design workshop on January 16, during the Tasmania trip. (And I still don't know if I'll be back to drive Fuzzy to her fencing tournament on the 27th.) I love the people, I love the technology, but this is getting old.

I got the first pass of ftpget checked in last night. (Yes there was one in pending, but I was 2/3 of the way through a new one before I noticed.)

Running my own code under strace, glibc's getaddrinfo() call is doing insane amounts of work. It's opening a NETLINK socket, doing sendto() and a bunch of recvmsg() and then opening a second socket to connect to "/var/run/nscd/socket". I'm testing against "", there is no excuse for this. I tried adding a loop to my xconnect() function to do an AI_NUMERIC lookup first, and only try a non-numeric lookup if that failed (and if the user requested it), but it's still doing all that crap for a numeric lookup. (You can tell from the string if it's a numeric address.)

You wonder why there's 8 gazillion weird security holes every year? Oh well, hopefully linking against musl and bionic isn't this broken.

Sigh. I got ftpget finished and checked in, but haven't done the test stuff for it yet. Instead I did a "git diff" on my tree to see what kind of other fires I left burning, and here's a typical example:

$ git diff toys/*/dmesg.c | diffstat
 dmesg.c |   73 +++++++++++++++++++++++--------------------
 1 file changed, 38 insertions(+), 35 deletions(-)

I.E. "I was working on a thing and got interrupted. Again." This was something I did in San Diego last month, in response to Elliott sending me basically a complete rewrite of dmesg for a new API that Kay Sievers crapped all over the kernel. It is a very, very bad API.

But I can't work on it now. I need to go do GPS stuff for $DAYJOB instead. Endless, forever GPS stuff. (Because rtklib is only half an implementation, Andrew Hutton's code is GPLv3 so we can't use any of it, and basically everything else parses the output of an integrated on-chip solution that doesn't give us the data we need.)

January 3, 2017

The on-again off-again Tasmania trip looks on again? They're trying to involve another trip to Japan, which I'm ambivalent about. As much as I adore hanging out there I can't make plans (Fuzzy wants me to drive her to a Fencing tournament 15 minutes from home on the 27th: will I be able to? I have NO IDEA...) and I'm starting to develop a precursor to vericose veins from all the long plane flights, and my thighs object to adding an extra 12 hour leg between Japan and Tasmania. (United and Greyhound have equally uncomfortable seats.)

Still, yay talk. I've never spoken at this conference before, it's an honor to be accepted, and I'd really like to be able to do this.

Yesterday turned out to be a day off, which I found out when nobody but Rich was on the 5pm call. I texted jen and found out it was a holiday since New Year's was on sunday... at the end of the work day. So I'm taking today as that day off since I did GPS stuff monday. (Not in a hugely effective manner but it still counts.)

Trying to finish up ftpget, which is a thing that busybox apparently dreamed up, or at least there's no standard for it I've been able to find. I've been using it forever because it's a really easy way to script sending a file from point A to point B, and it's a thing I need for toybox make airlock/mkroot to do its thing with the full QEMU boot and control images and all that: the QEMU image dials out to an FTP server on the host's loopback and does an "ftpput" of its output files. This is the simplest way I've found of sending a file out through the virtual network, or from a chroot to a host system. Yes there are like 12 other ways, and I'd happily have a virtual filesystem be the way if qemu had a built-in smbfs server. Alas it has virtfs, which combines virtio with 9p (both turn out to be full of rough edges), using it requires QEMU be built against strange libraries on the host (it's just extended attributes, why would you need libcap-devel and libattr-devel?) and then the setup is crotchety and I should probably revisit it someday, but my own todo item there is doing a simple smb server in toybox. So for the moment: implement ftpget.

The ftp protocol is only moderately insane, as in you can carve a reasonable subset out of it and ignore the rest... until you try to make cryptography work. Alas, even in "passive" mode you still have to open a second connection (http's ability to send the data inline in the same connection was apparently a revolutionary advance), which is why masquerading routers have to parse ftp connections, and don't ask me how sftp is supposed to work. But for sending files one hop on a local network, it still works.

(An aside: I should clean up and check in my netcat mode that prints the outgoing data in one color and the incoming data in another color, either to stdout or to a file. It's very useful logging. Doesn't quite do it for FTP because there's that whole second channel data goes along, but it lets you see what the control logic actually looks like going across the wire. But THAT's tied up with the whole "stop gcc's liveness analysis from breaking netcat on nommu in a semi-portable way" changes, which boils down to either "move everything to a function" or "move everything to globals", or possibly I can just hide it in XVFORK() which would be nice...)

The main problem with ftpget/ftpput is there are several other things you need to be able to do, which ftpget has no provision for, and I'm not adding: ftpls, ftpls-l, ftprm, ftpmv, ftpmkdir, and ftprmdir. Instead, I want to add flags to ftpget, which implies that ftpput is also an ftpget flag (with the command name being a historical alias with a different default action).

This gives me rather a lot to test, which raises the "how do I script these tests" issue. This test is tricky because I need netcat and ftpd to test ftpget. I need netcat because I've been testing against busybox ftpd (it's there, and no two ftpd command lines seem quite the same) and that only works on stdin/stdout.

This means "make test_ftpget" requires ftpd and netcat to be in the host $PATH, and the right _versions_ of each. (The toybox versions, not whatever weird host versions might have their own command line syntax. Busybox has two different ones for netcat. Back before I relaunched toybox and was trying to get back into busybox development this is one of the things that drove me away again.)

Sigh. I should start a todo list just for items that would be a full-time job for somebody. A real Linux standard that documents what things do (a role Posix abdicated in favor of continuing to host Jorg Schilling, and the Linux Foundation abdicated in favor of accepting huge cash donations from Red Hat and Microsoft). A j-core+xv6+musl+toybox course teaching people C programming, VHDL, and both processor and operating system design. Where "course" is probably a 2-4 year program. A "hello world" kernel like I complained about years ago...

Anyway, back to ftpget...

January 2, 2017

In lieu of getting to spend the week between christmas and new year's catching up on toybox and mkroot, I got a 3 day weekend.

(Ok, I explained how my normal working style involves round-robining between different tasks so that when I'm blocked on one I can make progress somewhere else, and that I can focus past this to hit deadlines because the deadline is its own safety valve letting me know when I can _stop_ caring about this topic, but after the deadline I usually need extra recovery time and in the absence of deadlines a round-the-clock push that never ends is called a "death march". And that after three weeks focusing round the clock on GPS in Tokyo including evenings and weekends, two more weeks in San Francisco, and having GPS looming over my head as "the priority" the rest of the time while I was putting out other fires, being told the tuesday between christmas and New Year's "You don't get vacation and you can't cycle to lower priority things for recovery time, you will work on GPS and nothing else until further notice"... Yeah, that's building up to the kind of "I can't stand to look at this code, it viscerally disgusts me" that made me leave BusyBox. Yes it's a failing, but after a couple decades of doing this I know my limits. I haven't actually had a _vacation_ since I joined this company over 2 years ago, and since November have been in Tokyo, Austin, Minneapolis, San Francisco, and San Diego, with a trip to Tasmania pending (but not yet funded).)

Still: I managed to beg Friday off and took the weekend, and got a _little_ caught up on toybox and mkroot.

My mkroot project is happening because Aboriginal Linux died. I introduced the idea of ending the project here, and announced the mkroot stuff here, but finding time to work on it's been hard.

The problem is the toolchain packages have been frozen on the last GPLv2 releases for years because I'll never voluntarily ship GPLv3 binaries in a hobbyist context, and nobody regression tests against those versions anymore. In the course of 2 kernel releases last year they broke the build on 4 different architectures (for 4 different reasons). It wasn't anything I couldn't fix, but it was more than I could keep up with in the time I had available, and I fell far enough behind catching up was more work than it was worth. I'd been meaning to switch to a new (llvm/clang) toolchain for ages, other people did their own toolchains, and when Rich did his own musl toolchain builder, I went "sure, let's use that one".

But taking the toolchain build out of the project meant there wasn't enough left to justify the rest of the infrastructure (such as the download and package cache stuff extracting and patching tarballs), and I did a really simple root filesystem build that fits in one file (building a usable toybox-based root filesystem, using musl-cross-make, in a single 300 line shell script), and went "huh, how much of the rest is actually needed"?

I ported the stuff to the toybox build as a new "make install_airlock" target, the environment variable sanitizing is more or less a single "env -i" call with a half-dozen variable whitelist...
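The core of that sanitizing is small enough to sketch (this whitelist is illustrative, not the actual list from the toybox script):

```shell
# env -i throws away the entire inherited environment, then only the
# whitelisted variables are passed back in, so host settings can't leak
# into the build.
export LEAKY_HOST_VAR=surprise
clean=$(env -i PATH=/usr/bin:/bin HOME="$HOME" TERM=dumb env)
```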

I'm still not quite sure where to host it: at first I had it attached to the j-core account (back when $DAYJOB let me spend time on it and it was of use to them), then I put it on my github, but really the idea is to build a simple initramfs and I should probably just add a "make install_root" target to toybox. But to do that, I need to clean the busybox dependency out of it, which comes in two parts:

The script is downloading and compiling busybox, with a config file that results in:

bunzip2 bzcat bzip2 gunzip gzip hush ping route sh tar unxz vi wget xzcat zcat

Toybox's new "install_airlock" has a $PENDING command list it symlinks out of the host $PATH (grep for PENDING= in scripts/), currently containing:

bunzip2 bzcat dd diff expr ftpd ftpget ftpput gunzip less ping route tar test tr vi wget zcat awk bzip2 fdisk gzip sh sha512sum unxz xzcat

The mkroot list is a subset of the airlock list, but eliminating the mkroot list lets me merge into toybox. That said, the airlock list is what's necessary for a "hermetic build", which is of interest to the Android guys. (Ok, it's just the tip of the iceberg for what they need, but still.)

Of the 15 commands in the mkroot list, I have good implementations of bunzip2, bzcat, gunzip, and zcat already. (They're needed as busybox builtins due to the way busybox tar works.) I can do bzip2 and gzip reasonably easily (I did most of bzip2 a decade ago, the problem is the string sorting plumbing is just sort of a heuristic I never understood well enough to write my own; gotta dig into the math of the various sorting approaches and understand why the fallbacks trigger that way) but am not sure bzip2 compression side is actually necessary? (It's obsoletish. There's no xz compression side logic either but I'd probably just want to do lzma without the instruction-specific compressors there, if that's an option). The ping, route, and tar commands are cleanups/rewrites of stuff in pending, as is xzcat/unxz (but that's a way bigger cleanup, that _does_ need the architecture-specific instruction compressors to deal with existing archives). The reason I haven't finished gzip yet is it's not clear where the dictionary resets should happen (nobody quite agrees, "every 250k of input" is probably reasonable; using SMP for compression/decompression is related to this). I've wrestled with wget a bit already and will probably just end up rewriting it.

The really hard commands are hush/sh and vi. I don't strictly need vi to build, although that one's not hard (just elaborate). But you can't build without a shell. And _what_ you build with the shell is... squishy. Unclearly defined. I need lots of scripts to run through the shell to see where the behavior diverges and fix it, but I haven't built Linux From Scratch under hush either, so... (Sigh. Probably set up a chroot with bash, automate the current LFS build under that, and use that to make toysh tests).

So I need to tackle toysh in order to merge mkroot into toybox "make install_root", which means it has to live on its own for a while. :P

But in the meantime, I think I've gotten most of the aboriginal linux plumbing to the point I can ignore it. There are some nice bits I'm not reimplementing (mainly the package cache stuff), but not stuff I actually _need_ and it was always annoying to try to explain it anyway.

And after all that: what I actually spent the weekend banging on in toybox was mostly ftpget. (I noticed there's one in pending after I was halfway done writing a replacement. Sigh. It's one of the three things "make install_airlock" complains the host hasn't got, because my host is ubuntu not busybox.)

January 1, 2017

I aten't dead.

Last January 1, I mentioned that "update the darn blog" was one of my patreon goals, and in December it got hit! (Woo! Ok, I bumped the goal amount down a while back, but it got hit!)

I posted a patreon update explaining how my blog got constipated this time. I suppose I should explain it here as well (although that one has links).

I'm still riding a startup down (something I swore I'd never do again but I love the technology and the people). Since they haven't been able to pay me a full-time salary since June, I eventually took a side gig doing a space thing to refill the bank account, then found out why not only will China and India get to Mars before the US will but Vatican City probably will too: ITAR export regulations! Yes, the same insanity that (back in the 90's) meant openpgp/openssh/openssl were developed in Canada and Germany and could only be downloaded from non-US websites, but you weren't allowed to upload them _back_ out of the country because the US government said that was exporting munitions. This caused US cryptographers to move overseas and give up their US citizenship because otherwise they couldn't work in their chosen field.

It turns out this insanity was extended to the US space program in 1996 when we sold some crypto hardware to China on the condition that they couldn't examine it, just shoot it into space. Armed guards followed it to China, where they launched it on a rocket that exploded, and the hardware was never recovered. The resulting scandal extended ITAR to the whole space program. As my boss explained to me (the Friday of my first week there), "If I buy a screwdriver at home depot, it's just a screwdriver, but once I use it to turn a screw on a spacecraft it's now a munition and cannot be discussed with non-US persons".

As far as I can tell, this is why the US no longer has a space program to speak of (Commander Hadfield is Canadian), and why people like me don't want to get any of it on them. (It was _really_ fun for me still doing evening and weekend work on projects for a Canadian company with most of its engineers in Japan, and maintaining Android's command line utilities as my main hobby project. Yeah, not a comfortable position. I know where the "proprietary vs non-proprietary" lines are, but this ITAR crap? That's "covered with spiders" level of get it off me.)

This is why I stopped blogging, unsure what exactly I could say until I'd disentangled myself from that job (which took a while), and then I was out of the habit and way behind... (This method of blogging still has the problem that I can't post things out of order. I can _write_ them out of order, but the RSS feed generation plumbing is really simple and I have a personal rule of not editing old entries, even though it's just a text file I compose in vi.)

So, new year, new file. I'm still riding the startup down. Originally this was supposed to be until our next round of funding in October, but that came and went and I'm still paid half-time (but expected to work _more_ than full-time) without even a new deadline where the "funding knothole" might resolve. Lots of travel (which they pay for, but don't reimburse my receipts anymore). Wheee.

One of the big reasons I enjoyed this job so much is they used my open source projects, but recently they've switched to "you need to do closed source GPS correlator software to drive our patented hardware, to the exclusion of all else", and for several months I haven't had time/energy to advance toybox much. They even yanked christmas break out from under me. So, not sure how much longer that's going to last...

(The _most_ awkward part is I proposed a talk at LCA on their technology, but no tickets have been booked, we haven't had time to prepare talk material, I can't do the talk by myself and I'm not paying my own way to Tasmania to do it. It's an honor to be accepted and if I was going to cancel I should have given them a full month's notice, but I still don't know if this talk is actually going to happen. Or if it's going there and back or bouncing off there to go to Tokyo for a third multi-week intensive focus on the proprietary GPS stuff that I'm pretty burnt out on these days.)

Back to 2016