What I would like to see emerge as technology is “compute docking”: a dock which provides, among its peripherals, more CPUs and RAM.
This partially demonstrates a failure of software, in that the operating-system approaches in widespread use today have abandoned the idea of the OS and its trust boundaries being spread over multiple machines. You get clusters, and software written to run across clusters with a lot of heavyweight infrastructure for scheduling, deployment, and so on. But approaches such as Amoeba OS, Sprite and other “distributed operating systems” remain research projects (mostly defunct).
We don’t have “my laptop, with compute automatically farmed out to two machines in my basement”. We have SSH to run some jobs remotely; some distributed filesystems suitable for non-corporate use are appearing (Camlistore, Minio); we have home NASes; we have distcc for farming out software compilation. We don’t have a generic “all software might make use of this” extension of system capacity.
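distcc is instructive precisely because of how much manual wiring it needs. A sketch of a typical setup (the hostnames `basement1` and `basement2` are hypothetical machines running `distccd`); every line below is configuration that “compute docking” would ideally make unnecessary:

```shell
# Tell distcc which machines may receive compile jobs.
# "localhost" keeps some work on the laptop itself; the rest
# goes to the two hypothetical basement machines.
export DISTCC_HOSTS='localhost basement1 basement2'

# Build with more parallel jobs than the laptop has cores,
# letting distcc spread the surplus over the network.
make -j12 CC='distcc gcc' CXX='distcc g++'
```

Note that this only works for one application (the compiler), requires the same toolchain version on every host, and extends nothing else on the system.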
The core of a computer system is “trust and identity”: private keys, password managers, collections of documents which you want with you at all times (but don’t want to lose if the device is stolen). For mobile developers, what matters is the ability to write code no matter where you are, and long battery life; not the greatest gaming display.
When I get home, I can hook a computer up to an external display, external keyboard, external mouse, external disks (and auto-backup). But not more CPU or RAM. So to drive the external display, I need enough CPU and RAM in the core mobile system. I can’t suddenly bring multiple concurrent VMs out of hibernation to work on the distributed problem I’m programming against, without farming them out to other machines.
What does it mean to trust hardware enough to let your code run “natively” on it, though? What are the opportunities for malware to spread via the docking station’s resources (infected firmware, or just extra logic gates to steal keys)? What does an OS need to do to keep all access to local private keys “safe”? Are these the same guarantees you get from using a TPM anyway?
What does it mean to “dock” in such a scenario? Should simply being on the same WiFi network be enough? Graduated feature enablement so that if you connect a USB-C cable then external graphics displays run at 4K?
Why a laptop? Should a smartphone just be the core unit? A “wand”? A watch?
X11 tried to address some of this, but is today just a local graphics system for Unices, where people are surprised that it can even listen on IP networks.
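The network transparency is still there, just disabled and forgotten. A sketch, assuming a hypothetical remote host `basement1` and a laptop whose X server accepts the connection:

```shell
# The modern, tunnelled form: run a client on the remote machine,
# with its window appearing on the local display.
ssh -X basement1 xterm

# The historical, direct form: point DISPLAY at the laptop's X server.
# This assumes the local server listens on TCP, which most
# distributions now disable by starting it with "-nolisten tcp".
xhost +basement1                          # on the laptop: allow the host
ssh basement1 'DISPLAY=laptop:0 xterm &'  # remote client, local window
```

Display was the one resource X11 made remotable by default; CPU and RAM never got the same treatment.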
Where is the security distinction between docking to extra CPU and RAM in a docking station and using a distributed OS? What trust boundaries are necessary, and what features are required to enforce them?
How much configuration should one person have to do, instead of having this all “just work” with the NAS bought at the local office supply or electronics store?
Is it time to bring back Plan 9?