People of ACM - Jon Crowcroft

December 17, 2024

How did you initially become interested in computer networks and distributed systems?

I joined the University College London Computer Science Department in 1981, when we were one of the first nodes on the Internet, with access to the ARPANET via the Atlantic Packet Satellite network. I joined a research group funded by DARPA to work on interconnection techniques, and helped run a measurement project looking at protocol performance between Italy, Norway, the UK, and the US over this complex system. The measurement system was itself a distributed system, using remote file systems to log results from the many viewpoints around the network. Remember that this involved a geostationary satellite, so one component was some 35,000 km away, while the terrestrial path from Pisa to Washington, DC is also quite a way.
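To put that satellite leg in perspective, a quick back-of-the-envelope calculation (assuming the nominal geostationary altitude of about 35,786 km; this sketch is an editorial illustration, not part of the interview) shows why it dominated the latency of the whole measurement system:

```python
# Back-of-the-envelope propagation delay over a geostationary satellite hop.
# Real paths add slant range, processing, and queueing delay, so these are lower bounds.

C_KM_PER_S = 299_792.458      # speed of light in vacuum, km/s
GEO_ALTITUDE_KM = 35_786      # nominal geostationary orbit altitude

one_way_up = GEO_ALTITUDE_KM / C_KM_PER_S    # ground station to satellite
one_hop = 2 * one_way_up                     # up and back down to the far ground station
round_trip = 2 * one_hop                     # there and back again

print(f"uplink only:       {one_way_up * 1000:.0f} ms")   # ~119 ms
print(f"one satellite hop: {one_hop * 1000:.0f} ms")      # ~239 ms
print(f"round trip:        {round_trip * 1000:.0f} ms")   # ~477 ms
```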

Will you tell us about the key technical advancements that allowed more multimedia content to be shared over the Internet? What are the Internet’s current challenges with respect to bandwidth?

The main thing that has let us use Zoom or Netflix has really been the relentless speedup of links and routers. We did a lot of clever work on adaptive playout buffers and loss concealment in end systems, and attempted to devise algorithms for scheduling packet forwarding that respect the minimum bandwidth and latency bounds flows might need for decent rendering and interactivity. But in the end the network grew too large for us to change the infrastructure significantly. The economics favored offering more capacity over more complexity. That said, the challenge now is that there is a huge amount of diversity in deployed technology: half the world really does not have the capacity to enjoy remote real-time collaboration, education, entertainment, and so on, while the other half has more than enough. This is not a technical challenge, however, but one of economic, regulatory, and policy dimensions.
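The adaptive playout buffers mentioned above can be illustrated with a minimal sketch, assuming the classic RTP-era approach of smoothing the observed network delay and its variation and scheduling playout a safety margin beyond the estimate; the class name and constants here are illustrative, not any particular implementation:

```python
# Minimal sketch of an adaptive playout-delay estimator: keep exponentially
# weighted estimates of one-way delay and jitter, and play each packet out a
# few jitter-units later than the estimated delay.

class AdaptivePlayoutBuffer:
    def __init__(self, alpha=0.998, k=4.0):
        self.alpha = alpha   # smoothing factor for the running estimates
        self.k = k           # safety margin, in units of delay variation
        self.d_hat = None    # estimated one-way network delay
        self.v_hat = 0.0     # estimated delay variation (jitter)

    def playout_time(self, send_time, arrival_time):
        """Return the time at which this packet's media should be played out."""
        delay = arrival_time - send_time
        if self.d_hat is None:
            self.d_hat = delay
        else:
            self.d_hat = self.alpha * self.d_hat + (1 - self.alpha) * delay
            self.v_hat = self.alpha * self.v_hat + (1 - self.alpha) * abs(delay - self.d_hat)
        # Late enough to absorb most jitter, but no later than necessary.
        return send_time + self.d_hat + self.k * self.v_hat

buf = AdaptivePlayoutBuffer()
print(buf.playout_time(send_time=0.000, arrival_time=0.120))
print(buf.playout_time(send_time=0.020, arrival_time=0.155))
```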

There is also a bandwidth crunch in data centers, driven by the cost of training huge AI models. We may hit the physical limits of switching in that sector soon.

One of your interests has been ensuring data security in the cloud. In a recent video, you said that the big “public” cloud companies can’t protect your data. Will you elaborate why this is the case? You also discussed how the Turing Institute is partnering with the (UK) National Health Service (NHS) to use a “private” cloud that keeps data private while allowing researchers to learn from the data. Will you discuss how this works?

There are always vulnerabilities in systems, and many cloud providers have multiple tenants, some of whom might represent attack vectors on confidential data and compute. A private cloud may be more expensive, although at the scale of the NHS in the UK (which has roughly 1M employees and provides healthcare to approximately 70M citizens), compute, storage, and expertise represent a modest investment compared with hospitals, doctors, hi-res scanners, and so on. However, we still employ very strict access control and have multiple layers of security for processing, especially for research, where we offer Data Safe Havens that use secure enclaves and federated learning. As homomorphic encryption and secure multiparty computation become more affordable and widely understood, we will look at using those too. Again, there is a regulatory question about healthcare (and financial data, which is similar): UK, EU, and US law mean that offshore processing is not allowed, so we need local provisioning in any case to meet that requirement.
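As a rough illustration of the federated learning idea, in which each site trains on its own records and only model parameters leave the site, here is a toy federated-averaging sketch on synthetic data; the function names, data, and setup are hypothetical and do not describe the NHS or Turing deployment:

```python
# Toy federated averaging: each "site" takes a local training step on its own
# private data; only the resulting parameters are shared and aggregated.

import numpy as np

def local_gradient_step(weights, X, y, lr=0.05):
    """One least-squares gradient step on a single site's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(local_weights, site_sizes):
    """Combine per-site parameters, weighted by how much data each site holds."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, site_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_site(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

sites = [make_site(n) for n in (40, 60, 80)]   # three sites of different sizes

weights = np.zeros(2)
for _ in range(300):   # federation rounds: local step at each site, then aggregate
    local = [local_gradient_step(weights, X, y) for X, y in sites]
    weights = federated_average(local, [len(y) for _, y in sites])

print("recovered parameters:", weights.round(2))   # close to true_w = [2.0, -1.0]
```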

What is another example of an important research area in your field that will have significant impact in the near future?

I’ve been pushing on the topic of decentralization of systems for a long time (Internet routing was ever thus), but now we have over-centralized the cloud and are at risk on multiple fronts: not least sustainability, but also natural disasters and adversaries. The Fediverse represents a departure from this trend, with multiple layers of community-provided resources. Look at networks like Guifi in Spain or B4RN in the UK, as well as social media platforms like Mastodon. These are also starting to diversify, so that they do not depend on the single “gene pool” of software that renders centralized systems globally vulnerable. This sort of thinking pervades every layer, right up to the diverse moderation used in social media platforms, which mitigates capture.

As a member of the Steering Committee for the ACM Symposium on Computer Science and Law, what was the genesis for this annual event?

I was not involved in the creation of the Symposium on CS and Law from the very beginning, but was invited by Joan Feigenbaum from Yale to join the Steering Committee right after the pandemic. I’ve been working with lawyers in London on Cloud+Law for just over a decade, as the UK has a unique perspective, sitting between Europe and the US while being separate from both. We have studied quite a few dimensions of privacy, liability, and sustainability, and the symposium represents a place where this sort of interdisciplinary work, in which law informs CS and CS informs law, can be published. Prior to this, we found ourselves writing each paper twice: once for the law community and once for the computer science world. The next symposium is scheduled to take place in Munich, Germany, in 2025, which will expand the reach and visibility of this growing harmonization between the previously rather divided worlds.


Jonathan Crowcroft is the Marconi Professor of Communications Systems in the Computer Lab at the University of Cambridge. He is also a Visiting Professor at the Department of Computing at Imperial College London, and the Chair of the Program Committee at the Alan Turing Institute. Crowcroft is a leading figure in computer networks and distributed systems. He is credited with fundamental contributions to rural broadband, helping extend the Internet to multimedia, and founding the field of opportunistic networking.

Among his honors, Crowcroft was elected a Fellow of the Royal Society, a chartered Fellow of the British Computer Society, and a Fellow of the Royal Academy of Engineering. He was named an ACM Fellow (2002) for contributions to the design and analysis of network protocols and for technical leadership. He received the ACM SIGCOMM Award for Lifetime Contribution (2009) for pioneering contributions to multimedia and group communications.