Last week I learned something about Bitcoin that entirely changed my view on the urgency of implementing uniform security in the global routing system. While I value and appreciate that the Internet is made up of a network of independently owned and operated networks, I think there is a compelling reason for network operators to peer over the parapet of their network borders and focus on routing security as a contribution beyond their own realm.

For most of the history of the Internet, being in the routing business meant delivering packets on a “best effort” basis. In practical terms, as the Internet has become more commercially important, that has meant that individual network operators have focused on improving the efficiency and effectiveness of traffic within their own networks. For handoffs between networks (routing to the rest of the world), the emphasis has been on ensuring connectivity to well-positioned neighbour networks.

Over twenty years ago, we said it was a bad idea. Then the tables were turned, in the name of making the Internet commercially viable, and we’ve been living with the consequences ever since. The current “information economy” (aka, software and services spying on users) is “mobile agents” in reverse.

A quarter of a century ago, when the Internet was just blooming in the world, and technology innovation was everywhere, there was discussion of software agents. These were typically described as bits of code that would “act on your behalf” by transporting themselves to servers or other computing devices to do some computation, and then bring the results back to your device. Even then, there was enough security awareness to perceive that remote systems were not going to be interested in hosting these foreign code objects, no matter how “sandboxed”. They would consume resources, and could potentially access sensitive data or take down the remote system, inadvertently or otherwise.

I know, right? The idea of shipping code snippets around to other machines sounds completely daft, even as I type it! For those reasons, among others, systems like General Magic’s “Magic Cap” never got off the ground.

And here is the irony: in the end, we wound up inviting agents (literally) into our homes. Browser plugins like Ghostery will show you how many suspicious bits of code are executing on your computer when you load different webpages in your browser. Those bits of code are among the chief actors in the great exposure of private data in today’s web usage. You’re looking at cute cat pictures, while that code is busily shipping your browser history off to some random server in another country. Browsers like Firefox do attempt to sandbox some of the worst offenders (e.g., Facebook), but the problems are exactly the same as with the old “agent avatar” idea: the code is consuming resources on your machine, possibly accessing data it shouldn’t be, and generally undermining your system in ways that have nothing to do with your interests.

With the growing sense of unease over this sort of invasive behaviour, the trend is already being slowed. Here are two of the current countervailing trends:

  • Crypto, crypto everywhere — blockchain your transactions and encrypt your transmissions. That may be necessary, but it’s really not getting at the heart of the problem, which is that there is no respect for users’ information in these transactions. Take your pick of analogy — highway robbers, thumbs on the scale at the bazaar, smash-and-grab for your browser history, whatever.
  • Visiting increasingly specific, extra-territorial regulation on the Internet, without regard for feasibility of implementation (GDPR, I’m looking at you…). Even if some limited application of this approach helps address a current problem, it’s not an approach that scales: more such regulation will lead to conflicting, impossible-to-implement requirements that will ultimately favour only the largest players, and generally pare the Internet and its services down to a limited shadow of what we’ve known.

A different approach is to take a page from the old URA (“Uniform Resource Agent”) approach — not the actual technology proposal, but the idea that computation should happen (only) on the computing resources of the interested party, and everything else is an explicit transaction. Combined with the work done on federated identity management, those transactions can include appropriate permissions and access control. And, while the argument is made that it is hard to come up with the specifics of interesting transactions, the amount of effort that has gone into creating existing systems demonstrates a level of cleverness in the industry that is certainly up to the challenge.
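
To make the contrast concrete, here’s a minimal sketch of that kind of explicit transaction (the endpoint, token, and data shapes are all hypothetical, purely for illustration): rather than shipping an agent off to run on someone else’s machine, the client asks the remote party for exactly the data it is permitted to see, and does the computation on its own resources.

```python
import json
import urllib.request

# Hypothetical endpoint and access token, for illustration only.
API = "https://example.com/api/price-history?item=widget&days=30"
TOKEN = "scoped-access-token"  # carries explicit, limited permissions

# The explicit transaction: request exactly the data we are authorized
# to see, identifying ourselves (and our permissions) as we do so.
req = urllib.request.Request(API, headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(req) as resp:
    history = json.load(resp)  # assume a JSON list of {"price": ...} records

# The computation happens on our own resources, not the remote party's.
average = sum(record["price"] for record in history) / len(history)
print(f"30-day average price: {average:.2f}")
```

The particulars don’t matter; the point is that every exchange is visible, bounded and authorized, rather than arbitrary code running unexamined on someone else’s machine.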

Who’s up for that challenge?

What is the news:  publication of the updated standard for Internet transport layer security:  TLS 1.3

Why it matters:  TLS provides the basis for pretty much all Internet communication privacy and encryption.  The big deal with version 1.3 of TLS is that it strips out features with previously detected vulnerabilities and extends the protocol’s security and encryption.  TLS 1.3 should be more robust, and even less vulnerable, than TLS 1.2.

Who benefits:  TLS 1.3 only benefits people using applications and devices that implement it.  The good news is that, apparently, major browsers have already implemented and deployed it.  Additionally, the hope is that the lighter-weight, more straightforward nature of TLS 1.3 (as compared to previous versions) will be attractive to other application and device developers that have been reluctant to implement TLS in the past.
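
For developers wondering whether their stack has caught up, here’s a minimal sketch (assuming Python 3.7+, whose standard ssl module supports TLS 1.3; the host is just an example) that refuses anything older than TLS 1.3 and reports the negotiated version:

```python
import socket
import ssl

# Require TLS 1.3: connections to servers that only speak older
# versions will fail rather than silently downgrade.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("www.ietf.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.ietf.org") as tls:
        print(tls.version())  # prints "TLSv1.3" if negotiation succeeded
```

If the connection succeeds, both ends are speaking TLS 1.3; if it fails, the server hasn’t caught up yet.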

More info:  https://www.ietf.org/blog/tls13/

Last week, I had the opportunity to attend and speak at Interop ITX 2018, in Las Vegas.  It was my first Interop — and an interesting opportunity to see more of the enterprise networking side of things.  That’s a space that is growing and increasing in complexity, even as cloud and software-as-a-service are notionally taking on the heavy lifting of corporate IT.

The refrain I overheard several times was that attendees really enjoyed having a vendor-neutral conference at which to talk about a variety of practical topics.  Indeed, it felt a bit like a NOG with an enterprise focus.

“Permissionless innovation” is one of the invariant properties of the Internet — without it, we would not have the World Wide Web.  Sometimes, however, this basic concept is misunderstood, or the expression is used as an excuse for bad behaviour.

Consider the case of the Guardian’s article on the use of “smart” technology, such as Wi-Fi trackers that tail people through their phones’ MAC addresses, to monitor activities in Utrecht and Eindhoven:  https://www.theguardian.com/cities/2018/mar/01/smart-cities-data-privacy-eindhoven-utrecht:

“Companies are getting away with it in part because it involves new applications of data. In Silicon Valley, they call it “permissionless innovation”, they believe technological progress should not be stifled by public regulations.”

“Cooperation”, “Consensus” and “Collaboration” are three C-words that get thrown around in the context of Internet (technology and policy) development.  Given that consensus is at the heart of the Internet Engineering Task Force’s organizing principles, I was a little surprised to see it treated as a poor discussion framework in Peter J. Denning and Robert Dunham’s “The Innovator’s Way: Essential Practices for Successful Innovation”.

While I still don’t entirely buy the authors’ view of consensus as a force that stifles creativity, with a little more reflection I could see their argument that consensus aims to narrow discussion in order to find an outcome.  In an engineering context of complex problems, when the problem is well understood and an answer has to be selected, that’s a good thing.

However, for many of the challenges facing the Internet, there isn’t even necessarily agreement that there is a problem, let alone a rough notion of what to do about the challenge.  These are wicked problems, requiring more collaboration across diverse groups of people and interests.  The heartening thing is that we’ve actually solved some of these wicked problems in the past — the existence and continued functioning of the Internet is testimony to that.

At any given moment, the Internet as we know it is poised on the edge of extinction.

There are at least two ways to understand that sentence. One is pretty gloomy, suggesting a fragile Internet that must continually be rescued from doom. The other way to look at it is that the Internet is always in a state of evolution, and it’s our understanding of it that is challenged to keep up. I tend to prefer the latter perspective, and think it’s important to keep the Internet open for innovation.

At the same time, change can be scary — for example, if it leads to an outcome that impacts us badly and from which we cannot recover. That’s at least one reason why discussion of policy requirements and changing the Internet can be pretty tense.

If we want a dialog about the Internet that is as global as the network itself, we need to know how to talk about change:

  • what are the useful points of reference (hint: they aren’t particular technologies), and
  • how can we frame a successful dialog?

I had a fun time on ISOC-DC’s #5in5in5 panel — talking about five things that will be different about the future of the Internet in 2020.  There’s video of the panel available on ISOC-DC’s livestream TV site:  http://www.isoc-dc.org/isoc-dc-tv/ .

My 5 points, delivered within 5 minutes, covered:

  1. The Internet continues to try to “get out of the box” — increasingly, we don’t see “the Internet” as separate from our tools or tasks.  They merge.  Sometimes it’s seamless — you might start a text message conversation with someone on your phone, then answer from your computer as you walk by, and pick it up again on your iPad.  Each device has all the context.  You’re not thinking about which network you’re using to send the messages.  The downside is the loss of individual control over your Internet experience — having your car hacked over the net while you’re driving it is the price of all that invisible interconnectedness.  Also, the Internet becomes opaque, another packaged commodity, and a lot less likely to be something we can all hook into, climb onto, and understand.
  2. Various forces — policy makers, big business, whatever — are trying to put the Internet into a box, or some structure.  For example, regulations requiring that the Internet not serve particular information beyond geographic boundaries are essentially implemented by aligning the actual Internet network with those geopolitical limits.  I have had a lot to say about the challenges with that approach, but suffice it to say that the Internet was not built to pay attention to political lines, and imposing those structures reduces its resiliency, its efficiency and its effectiveness.
  3. New approaches are needed!  If these approaches to regulation and restriction are not going to work (because they reduce the Internet to something unusable), then we need a different way to talk about the Internet, the services that run over the Internet, and how we articulate and enact policies that relate to them.  Don’t try to curtail web page access by making laws requiring ISPs to delete entries in the DNS; instead, figure out a better way to get international policy to common ground on what constitutes inappropriate use of the Internet.  (Make the action illegal, not the tool.)
  4. And yet, everything old is new again.
    1. The Internet was developed as an inter-network — making a whole out of disparate parts collaborating.
    2. We could not have seen the kind of IPv6 deployment we have today if large, competitive web companies hadn’t stood up to do World IPv6 Day and World IPv6 Launch.
    3. The future is better if we can regain and foster more of that sense of cross-industry collaboration to find solutions that are best for the Internet as a whole.
  5. As I recall saying at an ISOC-DC “Future Internet” panel some years ago, if I could tell you what the Internet was going to be in 2020, it wouldn’t be the Internet, now would it?!  The beauty and power of the Internet is that it is a platform that supports creativity, communication and development for everyone, and we have no way to measure the depth and breadth of that much creative energy.  So much of what we now consider “normal” on the Internet — say, Facebook — was unthinkable until someone thought of it and built it.  If the Internet loses its ability to support that kind of novel development, then it’s not the Internet anymore, it’s just another network.

It was a fun panel — good discussion with the attendees, too.  From the comments that came up during the session, it’s clear that people have very real concerns about how we keep the Internet a platform for “permissionless innovation” while ensuring that we retain some level of privacy and management of our personal information.

As Mike Nelson said in moderating the session — there’s enough meat in each of the topics we brought up to fuel a semester-long university course!

Today, I want to share with you something that I’ve been working on for the last several months — a concrete vision and proposal for supporting the Internet’s development.

For some, “Internet development” is about building out more networks in under-served parts of the world. For others, myself included, it has always included a component of evolving the technology itself, finding answers to age-old or just-discovered limitations and improving the state of the art of the functioning, deployed Internet.  In either case, development means getting beyond the status quo.  And, for the Internet, the status quo means stagnation, and stagnation means death.

Twenty-odd years ago, when I first got involved in Internet technology development, it was clear that the technology was evolving dynamically.  Engineers got together regularly to work out next steps large and small — incremental improvements were important, but people were not afraid to think of and tackle the larger questions of the Internet’s future.  And the engineers who got together were the ones who would go home to their respective companies and implement the agreed-upon changes within their products and networks.

Time passes, things change.  Now that the Internet is an important underlay to the world’s day-to-day activities, the common perspective on the best “future Internet” is: hopefully as good as today’s, but maybe faster.  And many of the engineers have gone on to better things, or management positions.  Companies are typically larger, shareholders a little more keen on stability, and engineers are less able to go home to their companies and just implement new things.

If we want something other than “current course and speed” for the Internet’s development, I believe we need to put some thoughtful, active effort into rebuilding that sense of collaborative empowerment for the exploration of solutions to old problems and development of new directions — but taking into account and working with the business drivers of today’s Internet.

Clearly, it can be done, at least for a specific issue — I give you World IPv6 Launch.

Apart from that, what kinds of issues need tackling?  Well, near-term issues include routing security, as well as fostering measurements and analysis of the currently deployed network.  Longer-term issues can include things like dealing with rights — in handling personal information (privacy) as well as created content.

I don’t think it requires magic.  It might involve more than one plan — since there never is a single right answer or one size that fits all for the Internet.  But, mostly, I think it involves careful fostering, technical leadership, and general facilitation of collaboration and cooperation on real live Internet-touching activities.

I’m not just waving my hands around and writing pretty words in a blog post.  Earlier this year, I invited a number of operators to come talk about an Unwedging Routing Security Activity (URSA), and in April, we had a meeting to discuss possibilities and particulars.  You can find out more about the activity, including a report from the meeting, here.

That was a proof point for the more general idea of this “coordination” function I described above — for now, let’s call it the Centre for the Creative Development of the Internet, and you can read more about that here:  http://ccdi.thinkingcat.com/ .

In brief, I believe it’s possible to put together concrete activities that will move the Internet forward, that can be sustained by support from individual companies that have an interest in finding a collaborative solution to a problem that faces them.  The URSA work is a first step and a proof point.

Now the hard part:  this is not a launch because, while the idea is there, it’s not funded yet.  I am actively pursuing ways to get it kick-started, so as to be able to make longer-term commitments to needed resources, and to get the idea out of the lab and working with Internet actors.

If you have thoughts or suggestions, I’m happy to hear them — ldaigle@thinkingcat.com .  Even if it’s just a suggestion for a better name :^) .

And, if we’re lucky, the future of Internet development will mirror some of its past, embracing new challenges with creative, collaborative solutions.

A visible product of my “self-funded sabbatical” is now published!

On the Nature of the Internet, by Leslie Daigle

My aim and hope is that it will provide some further insight into what not to do to the Internet, intentionally or inadvertently, so that collectively we can agree on the need to find better ways of dealing with the very real policy issues that need solutions.

The Internet has proven itself highly accommodating of change over the decades — today’s Internet looks nothing like the network of networks that existed 25 years ago, when commercial traffic was still prohibited from traversing it.  But, most of the changes that it has faced have come from technological or direct usage issues.  In today’s reality, many of the forces at play on the Internet are direct or indirect outcomes of (government and regulatory) policy choices.

If we want to continue to have a healthy and evolving Internet, we need to learn how to make policies that are consistent with, or at least not antithetical to, what makes the Internet work.

So, when I was asked last year to write a paper on the nature of the Internet for the Global Commission on Internet Governance, I turned first to the work we’d done at the Internet Society on the “Invariant Properties” that are true of a healthy Internet.  In the paper I wrote for the GCIG, now published as the Commission’s seventh paper, I tackled the questions of policy choices that are driving us towards national networks and localized abuse of Internet infrastructure, through the lens of those eight invariant properties of Internet health.

Here’s the executive summary:

This paper examines three aspects of the nature of the Internet: the Internet’s technology, general properties that make the Internet successful and current pressures for change. Current policy choices can, literally, make or break the Internet’s future. By understanding the Internet — primarily in terms of its key properties for success, which have been unchanged since its inception — policy makers will be empowered to make thoughtful choices in response to the pressures outlined here, as well as new matters arising.

Have a read of the paper, and let me know what you think — other examples of policy driving us in the wrong direction?  New approaches to policy-making that will help us solve problems and have a healthy Internet?  I’d love to hear your perspective, and — more importantly — see a broader discussion develop around different perspectives.