Over twenty years ago, we said it was a bad idea. Then the idea was turned on its head, in the name of making the Internet commercially viable, and we’ve been living with the consequences ever since. The current “information economy” (a.k.a. software and services spying on users) is “mobile agents” in reverse.
A quarter of a century ago, when the Internet was just blossoming into the wider world and technology innovation was everywhere, there was much discussion of software agents. These were typically described as bits of code that would “act on your behalf” by transporting themselves to servers or other computing devices, doing some computation there, and bringing the results back to your device. Even then, there was enough security awareness to perceive that remote systems were not going to be interested in hosting these foreign code objects, no matter how “sandboxed”: they would consume resources, and could potentially access sensitive data or take down the remote system, inadvertently or otherwise.
I know, right? The idea of shipping code snippets around to other machines sounds completely daft, even as I type it! For those reasons, among others, systems like General Magic’s Telescript (the mobile-agent technology behind its “Magic Cap” devices) never got off the ground.
And here is the irony: in the end, we wound up inviting agents (literally) into our homes. Plugins like Ghostery will show you how many suspicious bits of code are executing on your computer when you load different webpages in your browser. Those bits of code are among the chief actors in the great exposure of private data in today’s web usage. You’re looking at cute cat pictures while that code is busily shipping your browser history off to some random server in another country. Browsers like Firefox do attempt to sandbox some of the worst offenders (e.g., Facebook), but the problems are exactly the same as with the old “agent avatar” idea: the code is consuming resources on your machine, possibly accessing data it shouldn’t, and generally undermining your system in ways that have nothing to do with your interests.
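To make that concrete, here is a deliberately simplified, hypothetical sketch of what one of those embedded third-party snippets does once a page loads it. The endpoint, field names, and storage key are all invented for illustration; real trackers vary, but the shape of the exchange is the point: your browser does the work, and someone else gets the data.

```typescript
// Hypothetical third-party "analytics" snippet of the kind a page embeds
// in exchange for free widgets or ads. Endpoint and names are invented.
const COLLECTOR = "https://collector.example.net/beacon"; // hypothetical server

function reportVisit(): void {
  // Everything below is handed over simply by running someone else's code
  // in the visitor's browser.
  const visitorId = localStorage.getItem("vid") ?? crypto.randomUUID(); // persistent tracking ID
  localStorage.setItem("vid", visitorId);

  const payload = {
    page: window.location.href,      // what you are reading right now
    referrer: document.referrer,     // where you came from
    userAgent: navigator.userAgent,  // browser and OS fingerprint material
    language: navigator.language,
    screen: `${screen.width}x${screen.height}`,
    visitorId,
  };

  // Fire-and-forget: the visitor never sees this request unless they go looking.
  navigator.sendBeacon(COLLECTOR, JSON.stringify(payload));
}

reportVisit();
```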
With the growing sense of unease over this sort of invasive behaviour, the tide is already starting to turn. Here are two of the current countervailing trends:
- Crypto, crypto everywhere: blockchain your transactions and encrypt your transmissions. That may be necessary, but it’s really not getting at the heart of the problem, which is that there is no respect for the information being shared in these transactions. Take your pick of analogy: highway robbers, thumbs on the scale at the bazaar, smash-and-grab for your browser history, whatever.
- Visiting increasingly specific, extra-territorial regulation on the Internet, without regard for feasibility of implementation (GDPR, I’m looking at you…). Even if some limited application of this approach helps address a current problem, it doesn’t scale: more such regulation will lead to conflicting, impossible-to-implement requirements that ultimately favour only the largest players, and generally pare the Internet and its services down to a pale shadow of what we’ve known.
A different approach is to take a page from the old URA (“Uniform Resource Agent”) work: not the actual technology proposal, but the idea that computation should happen (only) on the computing resources of the interested party, with everything else handled as an explicit transaction. Combined with the work done on federated identity management, those transactions can include appropriate permissions and access control. And while the argument is made that it is hard to come up with the specifics of interesting transactions, the amount of effort that has gone into building today’s systems demonstrates a level of cleverness in the industry that is certainly up to the challenge.
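As a rough sketch of that idea, and only a sketch: the service endpoint, token source, scope name, and message shape below are all invented, and nothing in the URA proposal mandates this particular form. The point is that computation stays on the interested party’s device, and the only thing that crosses the network is one narrowly scoped, permissioned, auditable request.

```typescript
// A minimal sketch of the "explicit transaction" idea, under invented names.
// No foreign code runs on either side; one scoped question goes out, one answer comes back.

interface Transaction {
  scope: string[];      // exactly which data/operations the caller consents to
  accessToken: string;  // token from a federated identity provider (hypothetical)
  request: unknown;     // the one question being asked of the remote service
}

// Hypothetical service endpoint, for illustration only.
const SERVICE = "https://service.example.org/query";

async function explicitTransaction(question: object, token: string): Promise<unknown> {
  const tx: Transaction = {
    scope: ["price-lookup"],  // the server acts only within this declared scope
    accessToken: token,
    request: question,
  };

  // One explicit exchange, with permissions attached, instead of hosted foreign code.
  const response = await fetch(SERVICE, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(tx),
  });
  return response.json();
}

// Usage: the device decides what to ask, asks it, and keeps everything else local.
// explicitTransaction({ item: "widget-42" }, myFederatedToken).then(console.log);
```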
Who’s up for that challenge?