
Those of you who track the announcements of IETF Internet-Draft publications may have noticed a “draft-daigle-” document pop out in the flurry of last-minute pre-IETF92 documents.  (ICYMI, the document is:  draft-daigle-AppIdArch-00.txt).

Related to the work I’ve been doing in bolstering the content of “The Attic” of applications identifier technology history, I started to think about a general framework to describe applications identifiers.  So many times we’ve been through the same design discussions — it would be nice to capture the state of the art in tradeoffs and design considerations and simply move forward.

Future versions of this or other documents are intended to delve more deeply into questions of design choices, as well as the broader question of applications architectures (which are uniquely tied to identifiers, content, and resolution).

That’s the theory behind the draft.  It is an “-00” version, with all the draftiness that implies.  My hope is that it will stimulate some discussion and feedback.  I’d love to hear your thoughts — comment here, send me an e-mail, or catch me in Dallas at IETF92.

The long-standing and generally-held belief of the Internet community has been that the Internet’s governance should be based on a “multistakeholder” model.  Whatever you may surmise is the proper definition of that word, we should readily agree it doesn’t mean that a single government, any single government, should have override control of major swaths of the Internet or its support functions.

This year, there have been constructive community steps towards reducing the Internet’s dependency on a single nation, as well as a variety of reminders why it is important to make that effort successful.

Hence the global satisfaction at the NTIA’s announcement in March 2014 that it would seek a multistakeholder-model-supporting proposal to transition the NTIA (and the US government) out of its oversight role for the Internet Assigned Numbers Authority (IANA).    For many, this has been a long time coming — certainly, the Internet Architecture Board, on behalf of the IETF, has been signalling (to the NTIA, publicly) its concerns about the IETF’s lack of control over its own standards’ parameter assignment since at least the days when I was the IAB Chair.

The communities that have actual responsibility for managing the names, numbers, and other protocol parameters have, since March, stepped up to engage in developing the pieces of the requested proposal.   These are not random strawperson proposals to define a new Internet or governance system:  the communities involved are dependent on the IANA function for getting their own work done, and the focus has been on ensuring that the Internet’s naming, numbering and protocol development functions will continue to work reliably, responsibly and without undue interference in a post-NTIA-transition world.

For the protocol parameters part of IANA, the IETF’s IANAPLAN working group was chartered “to produce an IETF consensus document that describes the expected interaction between the IETF and the operator of IETF protocol parameters registries.”    From my vantage point as co-chair of the WG, I have seen the WG’s extensive discussion of the issues at hand, and watched the document editors do a sterling job of producing a document that will be the basis of the IETF’s contribution to the proposal.  With the WG’s document in last call across the IETF until the 15th of December (err, today!), the IETF is on track to have its contribution done by the January 15th deadline set by the inter-community coordinating committee.  (See IETF Chair Jari Arkko’s blog post for more details).

Just in case anyone’s energy was flagging before we finish the final details, there are timely reminders of why it is important to keep pressing on with defining (and realizing) the IANA in a post-NTIA reality.  As noted in Paul Rosenzweig’s article on Lawfare, “Congress Tries To Stop the IANA Transition — But Does It?”, a different part of the US government (the US Congress) is trying to stop the NTIA’s actions:

“Now Congress has intervened.  In the Omnibus spending bill that looks to be going through Congress this week the following language appears:

SEC. 540. (a) None of the funds made available by this Act may be used to relinquish the responsibility of the National Telecommunications and Information Administration during fiscal year 2015 with respect to Internet domain name system functions, including responsibility with respect to the authoritative root zone file and the Internet Assigned Numbers Authority functions.

(b) Subsection (a) of this section shall expire on September 30, 2015.”

Rosenzweig goes on to observe that the provision may well not have the expected impact, and might have more deleterious effects for the US.  Perhaps this is Congress attempting to use the budget process to stop the NTIA’s actions in their tracks; perhaps it’s just budget-jockeying on a scale not comprehended outside the limits of Washington, D.C.  But — It.Doesn’t.Matter.

Most of the Internet’s users do not live in the country in question, let alone have a voice in those discussions.  Nevertheless, they are impacted by the outcome.   Which is why the Internet community, which is global, and has solicited input broadly, is stepping up to create a future for the IANA that will:

  • Support and enhance the multistakeholder model;
  • Maintain the security, stability, and resiliency of the Internet DNS;
  • Meet the needs and expectations of the global customers and partners of the IANA services; and,
  • Maintain the openness of the Internet.

Clearly, those criteria cannot be satisfied under the control of any single government, as the US Congress’s actions remind us now!    The question is not whether the US government retains its historical role as contract-holder for the IANA functions.  The question is how best to meet the criteria thoughtfully laid out by the NTIA.


Today is the official launch of a new ThinkingCat Enterprises project — InternetImpossible.   The purpose of the project is to capture, share, and raise awareness of the many and varied wonders of the Internet, from its technology to its reach to its impact on people, on cultures, and on ways of doing things.

It’s a storybook.  And, like all good storybooks, it has lessons, or at least valuable learnings that should be remembered and shared.   The Internet is, in some ways, being taken for granted. Along with that ease and familiarity comes an increase in efforts to apply existing norms, processes, and problem-solving approaches to it.   So take a moment to review the stories.  Come back to read new ones.  And, if you’ve got a great story about how the Internet is impossible, or has enabled you to do something impossible, please share!  (Send an e-mail to “editor” at “internetimpossible.org”).

That’s it.  Why are you still here? 😉  Go check out http://www.internetimpossible.org .


Yesterday, I re-tweeted Cloudflare’s announcement that they are providing universal SSL for their customers. [1]   I believe the announcement is a valuable one for the state of the open Internet for a couple of reasons:

First, there is the obvious — they are doubling the number of websites on the Internet that support encrypted connections.    And, hopefully, that will prompt even more sites/hosting providers/CDNs to get serious about supporting encryption, too.    Web encryption — it’s not just for e-commerce, anymore.

Second, and no less important, is the way that the announcement articulates and shares their organizational thought processes.  They are pretty clear that this is not a decision made to immediately and positively impact their bottom line of business.  It’s about better browsing, and a better Internet in the long run is better business.  And, they are also pretty open about the challenges they face, operationally, to achieve this.    That’s another thing that can be helpful to other organizations contemplating the plunge to support SSL.

So, go ahead and have a read of their detailed announcement — and please don’t forget to come back and check whether this website supports encrypted connections.   It does not :-/   (yet).  I’ve added it to my IT todo list — right after dealing with some issues in my e-mail infrastructure.  I asked the head of IT for a timeline on that, and she just gave me a tail-flick and a paw-wash in response.  Life as a micro-enterprise.

More substantially, I could easily become a Cloudflare customer and thus enable encryption up to the Cloudflare servers.  But proper end-to-end encryption requires my site to have its own certificate, which means a unique IP address for this website; the going rate for that, given where my site is hosted, is $6/mo.   That adds substantially to the cost of supporting a website, especially when you might have several of them kicking around for different purposes.

There’s work to be done yet in the economics of the whole security system, it seems to me.    Open discussion of practical issues and eventual workarounds does seem like a good starting place, though.
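As an aside for fellow micro-enterprises: checking whether a site actually completes a TLS handshake takes only a few lines.  Here is a minimal sketch using Python’s standard ssl module (the function name and defaults are my own, not anything from Cloudflare’s announcement):

```python
import socket
import ssl

def supports_tls(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if `host` completes a TLS handshake on `port`."""
    context = ssl.create_default_context()  # verifies the certificate chain and hostname
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True  # handshake succeeded with a valid certificate
    except OSError:  # ssl.SSLError is a subclass of OSError, so this covers both
        return False
```

Point it at your own domain (e.g., `supports_tls("www.example.com")`) to see where you stand; a connection refusal, timeout, or certificate failure all come back as False.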

 

[1] http://blog.cloudflare.com/introducing-universal-ssl/

Time to introduce a new feature on the ThinkingCat site:  The Attic.

It is with chagrin that I acknowledge that I am an old enough <fill in appropriate but not-too-abusive-please epithet> that many hot new technology standards discussions are ringing in resonance with the long, hard exercises I recall from years past.   In particular, many of the discussions around “information centric networking”, “named data networking”, and new ways to handle intellectual property rights intended for digital media are working through similar problem spaces.  When is a resource “the same” enough to be the same?  Et cetera.

From my perspective, there was a vibrant community discussion of those issues in the heyday of standardization of Uniform Resource Identifiers at the IETF in the 1990s and early 2000s.   There was a small core of that community that really wanted to push URIs to be more than just “web addresses”, and saw an application infrastructure standards roadmap.  That roadmap never got implemented — at some point we acknowledged that the implementing community was not as keen, and there’s no fun in defining standards that never get used.

I would like to believe that the ICN and other groups have the implementors with them, and enough interest in the outcome to solve some of these issues that are being revisited.  It would also be useful if we could somehow short-circuit the learning curve, and not tread through all the same sequences.

Perhaps that is a vain hope, but it is the spirit with which I offer “The Attic” — a place where I intend to post up various remnants of those discussions, as culled from my spotty archives (driven by my even spottier recollection).

Today’s  inaugural contribution is on “Contextualized (URI) Resolution — C15N” (C15N because there are 15 letters between “c” and “n” in “contextualization”… get it?  Hey, I didn’t say the humour aged well).  That work never got beyond the BoF stage at the IETF, but the same questions arise when we look at any kind of advanced information resolution.
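(For the curious, the numeronym arithmetic generalizes — it’s the same trick that turns “internationalization” into “i18n”.  A throwaway sketch, entirely my own and purely for fun:)

```python
def numeronym(word: str) -> str:
    """Abbreviate a word as first letter + count of interior letters + last letter."""
    if len(word) <= 3:
        return word  # too short to be worth abbreviating
    return f"{word[0]}{len(word) - 2}{word[-1]}"
```

Running `numeronym("contextualization")` yields "c15n", so at least the joke checks out arithmetically.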

This is an experiment.  If nothing else, I’ll have a somewhat organized version of my own archive when I’m done 😉    But, if you find this useful, let me know — I’ll be more motivated to add to it.  If you have suggestions — of content or format, I’d also be happy to know.  Feel free to leave a comment here, or email me (I’m “ldaigle” at this site’s domain name).

P.S.:  Apologies to Twitter followers for the double-tweet of the last posting.   I had set up an app to auto-tweet my blog posts here, because automation is So!Cool! and then decided I’d really rather handcraft my tweets — authenticity is important to me.  Apparently, I failed to stomp adequately on the auto-tweeter app.  More stomping has been applied — let’s see if this works better.   My Twitter account is, after all, my1regret …

“Internet governance” is one of those catchy phrases that people bandy about with the knowing assurance that everyone knows what is under discussion — or with a view to ensuring that crispness and clarity remain elusive.   The Internet is not random, nor even particularly chaotic:  there have been elements of Internet governance since the inception of the network.

The reality is that governance (as in management) of the Internet has existed and evolved to meet the needs of the Internet as it has developed over the last four and a half decades.  This started with the need to have (open) standards for interoperable networking and agreed norms for acquiring and using parameters in those protocols.  It evolved as availability of some of those parameters (IPv4 addresses) was inadequate for expected needs, especially given the original sizes of grants in allocation.

Even before the “g” in “Governance” started being capitalized,  the Internet community organized itself to have a global, yet regionalized, system for open development of formally implemented policies for management of IP address allocation.  Let me say that more directly.  Problem:  handing out chunks of address space was wasteful and leading to rapid runout of IPv4 addresses.  Solution:  the Internet community built bottom-up, open policy development institutions to manage the equitable allocation of the addresses that remained.  That worked so well that the deployment of the successor protocol with a massive address space (IPv6) was deferred for a decade.
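The arithmetic behind that runout-and-deferral story is stark.  A quick sketch (the figures below are standard protocol facts, laid out by me for illustration):

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32      # 4,294,967,296 addresses for the whole Internet
ipv6_total = 2 ** 128     # roughly 3.4e38 addresses

# An original classful "Class A" grant handed one organization
# 1/256 of the entire IPv4 space in a single allocation:
class_a_grant = 2 ** 24   # 16,777,216 addresses per grant

print(f"Class A grants possible, total: {ipv4_total // class_a_grant}")
```

With only 256 such chunks to hand out, it is not hard to see why early allocation practices looked wasteful once the network started growing in earnest.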

While this approach to identifying and addressing problems for the Internet has worked well for those involved in developing the Internet, it’s not such a comfortable (recognizable, formal, predictable, <fill in the blank as you like>) system for those who are on the outside looking in.  And those are the people who are increasingly impacted by the Internet and its use:  governments, law enforcement agencies, other businesses.  These worlds are colliding.

Tussle -- Worlds Collide, Internet Governance

I explored that concept and others when, in June,  I  gave a lecture for the Norwich University Residency Week conference.   I’ve posted my slides for the talk on my Publications  page (See: 20140618-NorwichResidencyWeekInternetGovernance-cc).

The 3 key concepts of the presentation were:

  1. Internet governance sparks fly when worldviews collide — as described above.
  2. The Internet knows no physical boundaries — it wasn’t built with a view to following national or jurisdictional boundaries.  Imposing rules and regulations on it forces an unnatural network topology with unhealthy side effects.
  3. Internet governance should not only be about regulating technology and its use — for example, solving issues with abuse of “intellectual property rights” is more about getting agreement on what intellectual property is and how it should be handled than it is anything to do with networking.

As alluded to above, the definition of Internet governance (or Governance) has evolved over time.

  1. Making the Internet work through responsible construction and sharing
    • Original definition
    • Still see sparks of it – collaborative discussion of best paths forward in network architecting and operation
  2. Code for “management of critical Internet resources on a global basis”
    • International struggle to control the domain name system and/or IP addresses
    • Can the US pull the plug on a country’s Internet?
      • No
      • Country code domain name (e.g., .br for Brazil) relies on the DNS root zone file
  3. Physical world governance meeting and incorporating the Internet and its uses
    • As the Internet becomes increasingly part of our lives, it’s hard to separate “governance of the population” from the Internet

The Internet was not designed as a single-purpose, coherent network – it doesn’t even notice national boundaries.  That, in fact, is what gives us much of what we love about it.    So, increasing regulation of the wrong things could break what we love.

  • Forcing networks to line up on national boundaries
  • Regulating the Internet when really it’s some service that you wanted to focus on (e.g., “telephony”)

At the same time, there are key issues that need regulation in order to foster an orderly future for all.  So, we all need to address the tussles when worlds collide, and figure out how to do it right.

Internet Governance — if you’re into it, you’re all over it.  If you’re not into it, you probably think it’s somebody else’s problem.   But, the issue with that thinking is that governance (note the small “g”) of the Internet was specifically designed to be the business of everyone who uses and builds it.   The further away we get from that mentality, the more the Internet becomes an industry-driven product and not an inter-network.

Such was the message I delivered when, in June, I gave a keynote lecture to introduce the graduating class of MSISA (Master of Science in Information Security & Assurance) at Norwich University to the rudiments of Internet governance.  I’ve posted my slides for the talk on my Publications  page (See: 20140617-NorwichResidencyWeek-MSISA-cc).

The Internet has so infiltrated our daily lives that it is changing how we go about many aspects of our non-digital lives.  Just imagine trying to buy a house, the most physically-rooted, tangible object many of us aspire to owning, without having the resources of the World Wide Web.  It’s not just the realty sites — you probably also want to review the local schools, perhaps check out the social and civic activities in the area, and generally inform yourself with what people who live there have to say.

Many of those resources are available because the Internet allows “innovation without permission”.  Concerned citizens and enthusiastic locals who never would have thought of themselves as “content publishers” can readily set up information resources.  (Seriously — I can check out the food safety inspection report for our local grocery store online).   Of course, the World Wide Web itself is the poster child for the value of allowing innovation (on the Internet) without requiring permission.

The Internet’s management, or governance, has grown up over the decades of its existence.  No longer uniquely the purview of a handful of (primarily US) researchers, the Internet’s developers/deployers/users have set up open institutions to engage successive generations of Internet supporters in the process of thoughtful management of its resources.

Understanding the impact and value of the existing institutions, and ensuring the Internet’s users don’t become a simple “audience” for its services, are key challenges of evolving the Internet’s governance in the face of today’s political pressures.

RIP, Privacy?  I certainly hope not.   The dictionary.com definition of privacy includes “the state of being free from intrusion or disturbance in one’s private life or affairs”.   In that light, the suggestion that “you have zero privacy anyway, get over it” (attributed to Scott McNealy) is kind of scary:  if we’re trying to make the world a better place, we should have fewer intrusions and disturbances in our private life, rather than expecting that they should be the norm.

What we’ve seen over the last decade is an explosion in the exposure of personal data, due to:

  • data being shared in electronic form (especially, via the Internet)
  • cheap computer storage making it feasible to collect and retain massive quantities of data
  • cheap, fast computing that facilitates processing the masses of data to draw correlations and inferences

While privacy has traditionally been achieved through confidentiality of data, the factors above have outstripped our ability to develop appropriate responses, given how little data exposure it now takes to have an impact on privacy.

In early June, I gave a talk to a George Mason University class, “Privacy and Ethics in an Interconnected World”  in the Applied Information Technology department.   The assigned subject of the talk was “Regulating Internet Privacy”.

In preparing for the talk, I went through 5 “case studies” of data exposure and privacy impacts:

So-called Public Data:  E.g., is it okay that everyone knows how much you paid for your house?

  • At least in some jurisdictions, it is “public data”
  • It does help inform  future buyers in the area

But — it can create tense times with your family, friends and neighbours, which is most certainly an intrusion on your personal life.

And then along comes Zillow and takes individual pieces of public data to paint a picture of your neighbourhood, which paints a whole different picture of you,  your worth, etc.

Personal Data in Corporate Hands:  Your use of a particular service means you wind up sharing personal data.

  • In part, this is an inevitable consequence of carrying out a business transaction.
  • An argument is that it helps tailor your service to your interests.

But, when a history is maintained,  your usage is tracked,  and data is sold to third parties, the implications may catch you by surprise.  (Should you be denied health care coverage if you don’t walk 10,000 steps a day?).

Identity Data:   Data that identifies you personally undermines any opportunity for  anonymity.

Freedom of speech means different things in different parts of the world, and being able to voice an opinion without fear of repercussion follows from that.

Accountability:  On the flip side, there is a desire for some ability to attach actions to the individuals who are responsible for them.

  • When someone does something bad on or to the Internet, it should be possible to track them down
  • Of course, “something bad” is in the eye of the beholder

This is, of course, the complement of the desire to be able to provide anonymity.

Pervasive Monitoring: collecting any and all data about Internet connections, irrespective of source or destination or accountable person.

  • Governments demanding access to metadata of Internet connections, and sometimes content.
  • From reports, they have no a priori reason to track you, but it’s easier to collect all the data and figure out what they want later

But, inferences from data mining are not always correct, and if you don’t know they are being made, you have no recourse to fix them.

One thing that all of the points above have in common is that the data is being used for purposes beyond those people originally understood or expected.

It’s pretty hard to function in today’s society without sharing some data some of the time — so complete confidentiality is not an effective option.   But being cautious about what you share, and when, is necessary in this day and age.  The Internet Society has some useful guidance on that front.

And the other side of the coin is making sure that the data that is shared is treated appropriately.

And we should, indeed, be able to “rest in peace” — peace of mind that our lives are not being undermined by misuse of our data.

 

My first day back at the office after a summer of working remotely featured a traffic jam of the sort that reminds me why I hate commuting:  one car crash, a key highway closed, and no reasonable surface road alternative routes.     There’s just nothing to do but suffer the consequences when that road backs up.

I had an early team meeting and was already scrambling to leave the house with a buffer of half the regular commute time.  It wasn’t going to be enough.   I dropped a note to my team, who’d all be participating from their  locations (in other cities and countries), and warned them.

As I was driving to work, I thought about the fact that any one of my team, who know roughly where I live, and where the office is, could look at the Google Maps traffic status for the route and make a reasonable guess about my progress and likely delay.     That works because Google Maps is a World Wide Web resource, and is uniformly accessible to everyone on the globe.  That’s kind of a key feature of the Internet and its resources.

That kind of uniform access, where services don’t (in fact, generally can’t) pre-judge the boundaries of their service market, has been a hallmark of the Internet information age.  It has been the leveler of playing fields.  It has made obscure parts of the world accessible to all, kept people in touch with their home towns, and opened small businesses to global markets.

The thought that chased that one through my brain was:  how different it would be if each of my team had to download a traffic map app for my area in order to be able to check on traffic status.  They wouldn’t do it.  In fact, who’s to say that the traffic map app for my area would even be available in the iTunes store of another country? (Since that model more or less encourages pre-judgement of your target market).

As we rocket into the future of Internet-as-seen-from-your-mobile-device, I think it’s an important issue to ponder.  Are we exiting the age of ubiquitous information and access?  Is that a good thing?