Personal computing is stuck in the 90s

I’m amazed at how much work we’ve done to improve the building, updating, maintaining, and scaling of massive service-oriented computing.

But obviously, I’m still frustrated.

I have around 5 different display+keyboard interfaces that I use daily, but it still seems to matter which one I’m using. That means at least once a day I have to stop and remember which system I was working on when I tried something out (or started a change request).

I don’t want to go back to client-server, where the data, code, execution, etc. all run on “the big server” (though I have one) – I’m not always connected to a fat enough pipe, and interactive video gets weird. I’ve tried running a Zoom session (or video) on a remote system – it’s OK, until something happens.

And for some reason, I’m also not satisfied with just relying on each app’s own sync.

Really, why can’t I just have my data and programs be wherever it makes sense, and not think about that at all? (Pretty much the opposite requirement from Kubernetes, which is fundamentally about making sure that data and programs are running within spec, inside a specific cluster boundary.)

So what does this look like?

First up, I need a secure identity for each computer, each data-set, each program, for me, and for anyone else I want to share those things with.

Then, I need a secure network system in which I can use those identities to ensure data-sets can only be accessed by the computers, people, and programs that I trust them with.
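To make that concrete (this is just a sketch of the shape, not something we’ve built): give every computer, person, and data-set an ed25519 keypair, and have the data-set’s owner sign a grant naming the identities it is allowed to live on. Something like:

```typescript
// Sketch only: every computer, person, and data-set gets an ed25519 keypair,
// and the data-set's owner signs a grant naming the identity it may live on.
import nacl from "tweetnacl";

type Identity = { publicKey: Uint8Array; secretKey: Uint8Array };

const hex = (b: Uint8Array) => Array.from(b, (x) => x.toString(16).padStart(2, "0")).join("");
const grantMessage = (datasetId: string, grantee: Uint8Array) =>
  new TextEncoder().encode(`${datasetId}:${hex(grantee)}`);

// The data-set owner says "this data-set may be cached/opened by this identity".
function signGrant(owner: Identity, datasetId: string, grantee: Uint8Array): Uint8Array {
  return nacl.sign.detached(grantMessage(datasetId, grantee), owner.secretKey);
}

// Any node (or the filesystem layer) can verify a grant before handing data over.
function verifyGrant(ownerPub: Uint8Array, datasetId: string, grantee: Uint8Array, sig: Uint8Array): boolean {
  return nacl.sign.detached.verify(grantMessage(datasetId, grantee), sig, ownerPub);
}

// e.g. my laptop only gets "projects/notes" if it can present a valid grant from me.
const me = nacl.sign.keyPair();
const laptop = nacl.sign.keyPair();
const grant = signGrant(me, "projects/notes", laptop.publicKey);
console.log(verifyGrant(me.publicKey, "projects/notes", laptop.publicKey, grant)); // true
```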

Interactive programs need to be mobile – if it’s an interactive workload, then chances are, I want the UI to run on whatever screen I just moved to – and the data should come too.

Batch programs need to be able to run – preferably without impacting my interactive computer – and without me needing to notice that they’ve been offloaded elsewhere.

Nathan and I were working on one end of this problem – leveraging the Solana blockchain for the identity component, and as a persistent and auditable store of which nodes were connected to each other (and then which application ports might be shared).
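Very roughly, and glossing over the on-chain storage part entirely, the identity half might look like this: a Solana keypair doubles as a node’s network identity, and connection records are signed so anyone reading them later can check who announced them. (The record shape here is made up for illustration.)

```typescript
// Rough sketch: each node's identity is a Solana keypair, and a "these two
// nodes are connected, sharing port 8080" record is signed by the announcer.
// How/where the record is persisted on-chain is a separate piece, not shown.
import { Keypair } from "@solana/web3.js";
import nacl from "tweetnacl";

const nodeA = Keypair.generate();
const nodeB = Keypair.generate();

const record = JSON.stringify({
  from: nodeA.publicKey.toBase58(),
  to: nodeB.publicKey.toBase58(),
  sharedPorts: [8080], // hypothetical: which application ports are exposed
  at: Date.now(),
});

// nodeA signs the record with its Solana (ed25519) secret key...
const signature = nacl.sign.detached(new TextEncoder().encode(record), nodeA.secretKey);

// ...and anyone who later reads the record can check it really came from nodeA.
const valid = nacl.sign.detached.verify(
  new TextEncoder().encode(record),
  signature,
  nodeA.publicKey.toBytes()
);
console.log(valid); // true
```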

BUT – we need more. Very specifically, a network filesystem that presents a virtual filesystem on every node and auto-caches the data that node needs.
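The behaviour I want is basically a read-through cache: the first open of a path pulls it from wherever it currently lives, and every later open on this node is local. A toy sketch – the cache directory and the fetch mechanism are placeholders:

```typescript
// Toy read-through cache to illustrate the desired behaviour of the network
// filesystem: cold reads go over the network, warm reads come from local disk,
// and the disk is only ever a cache.
import { promises as fs } from "fs";
import * as path from "path";

const CACHE_DIR = "/var/cache/netfs"; // hypothetical local cache location

// Stand-in for "ask the network filesystem / a peer for these bytes".
async function fetchRemote(virtualPath: string): Promise<Buffer> {
  throw new Error(`not implemented: fetch ${virtualPath} from a peer or store`);
}

export async function readThrough(virtualPath: string): Promise<Buffer> {
  const cached = path.join(CACHE_DIR, virtualPath);
  try {
    return await fs.readFile(cached);            // warm: serve from local disk
  } catch {
    const data = await fetchRemote(virtualPath);  // cold: pull over the network
    await fs.mkdir(path.dirname(cached), { recursive: true });
    await fs.writeFile(cached, data);             // keep a copy for next time
    return data;
  }
}
```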

And we need a way to differentiate interactive workloads (browser, vscode, video and audio editing) from true batch jobs (code compilation, data analysis, automation), and for our shell (GUI or TUI or workflow) to automatically choose the right scheduler to decide where and how to run each one.

>>> OK, the above paragraph makes me think that webUI was right – and what might not be working for me is how all my browser use is represented as a single program with multiple windows and huge numbers of tabs. <<<< MAYBE everything is a batch job – some jobs just need one of their data sources (keyboard/mouse/webcam/microphone/speaker) to have a specific locality to the user.
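If that’s right, the shell-side decision might look something like the sketch below: classify the command, then hand it to whichever scheduler fits. The classifier and both scheduler bodies are deliberately dumb placeholders, because deciding how to classify is the actual hard part.

```typescript
// Sketch of a shell that picks a scheduler per command. Everything here is a
// stand-in; the point is the shape of the decision, not the implementation.
type WorkloadClass = "interactive" | "batch";

interface Scheduler {
  run(command: string[]): Promise<void>;
}

const localInteractive: Scheduler = {
  // keep the UI next to the screen and keyboard I'm sitting at
  run: async (_cmd) => { /* spawn locally, attach to my display */ },
};

const remoteBatch: Scheduler = {
  // ship it somewhere with spare CPU; I only need the outputs back
  run: async (_cmd) => { /* submit to a job network, e.g. something Bacalhau-like */ },
};

function classify(command: string[]): WorkloadClass {
  // deliberately naive: a real version needs much better signals than argv[0]
  const interactiveHints = ["code", "firefox", "zoom", "ffplay"];
  return interactiveHints.includes(command[0]) ? "interactive" : "batch";
}

export async function runAnywhere(command: string[]): Promise<void> {
  const scheduler = classify(command) === "interactive" ? localInteractive : remoteBatch;
  await scheduler.run(command);
}

// runAnywhere(["code", "."]) stays on this machine; runAnywhere(["make", "-j8"]) can go anywhere.
```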

Bryan and I were talking and riffing on a similar idea wrt the install/boot cycle for computers – if we have a global network filesystem, we should be able to turn on a computer, have it choose from a list of OS/UI types, stream that OS from the network, and then, as it works out where it is and who is using it, pull down the interactive (and batch) data and programs it needs. The disk is then only used to speed up future uses of those things – output data is written primarily to the network filesystem.

I _THINK_ this means that an internet-distributed job submission system like Bacalhau really needs an interactive shell that runs on a virtual overlay filesystem that _looks_ familiar. Good luck with the hard bits – is docker run -p 80:80 nginx a batch job with file-based output? And how is it differentiable from docker run run-weather-prediction? Do we need a crowdsourced library of npm run build vs npm run dev hints – and oh, those aren’t even universal, they’re codebase-specific aliases set in each package.json :(.
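If that crowdsourced library did exist, I imagine it as pattern-to-hint mappings with an honest “unknown, ask the user” fallback for things like project-specific npm scripts. Purely illustrative:

```typescript
// What a crowdsourced "is this batch, interactive, or a service?" library
// might look like: command patterns mapped to hints, with a fallback of
// asking the user once when the command can't be known in advance.
type Hint = "interactive" | "batch" | "service";

const crowdHints: Array<[RegExp, Hint]> = [
  [/^docker run .*-p \d+:\d+/, "service"],     // publishes a port: probably long-running
  [/^npm run (dev|start)\b/, "interactive"],   // dev servers want to be near me
  [/^npm run build\b/, "batch"],               // fine to run anywhere
  [/^(gcc|cargo build|make)\b/, "batch"],
];

function lookup(command: string): Hint | "unknown" {
  for (const [pattern, hint] of crowdHints) {
    if (pattern.test(command)) return hint;
  }
  // e.g. a package.json script we've never seen: ask the user once, remember the answer
  return "unknown";
}

console.log(lookup("docker run -p 80:80 nginx"));         // "service"
console.log(lookup("docker run run-weather-prediction")); // "unknown"
```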

Funniest thing is – the user-facing endpoint is still the same as the idea we were tossing around back in the 1990s (Hi! to the peeps that came to our place to talk about a ‘portable disk+cpu’ that could be plugged into a luggable, a desktop, or a network 🙂 ) – and we still don’t have that: every computer (phone, IoT device) is an island in a sea of variable-quality networking.

I guess I’m going to continue working on components of the dream 🙂

Author: Sven Dowideit

You might remember me from tools like http://TWiki.org, http://Foswiki.org, https://github.com/docker/Boot2Docker, Docker documentation, or https://github.com/rancher/os
