Story
Interview:
interview.html
Website
More info available:
http://markburgess.org/trustproject.html
Grant
Theme fund: NGI Assure
Period: 2022-12 — 2023-12

Mark Burgess - Promise Theory

Measure ongoing trust between interacting agents


Profile picture of Mark Burgess.

Can you introduce yourself and your project?

My name is Mark (Burgess). I'm not related to the famous spy. I've been a theoretical physicist and a Professor of Computer Science, and latterly a startup entrepreneur in the area of network and system management for the past 30 or so years, and now I advise and consult as a sort of odd job man in technology. I've written a few books on various topics related to that too.

Over the years, I've been fortunate to be involved with some of the major challenges in cloud computing, from configuration to networking and edge computing. As Troy McClure would say, you may know me from such movies as "CFEngine for Home Improvements" and "The Return of Promise Theory". I try to maintain a sense of humour about it and I've been involved both in fundamental research as well as Free and Open Source Software development for all of that time. Quite a mess really! (Laughs)

What are the key issues you see with the state of the internet today?

The Internet is an incredible phenomenon, isn't it? We know a lot about certain aspects of it - mostly the technology - but almost nothing about others. In particular, I think we tend to focus just on building whatever we feel like, pressing ahead with technical issues and being generally disparaging of attempts to understand the impact of what we make on human society.

People want to make money, get rich or famous, and so on, and social impact gets swept aside in the gold rush. This has been a kind of hobby horse for me since the millennium. I wrote a book back then called Slogans which predicted the rise of social media and how instantaneous access to information would undermine democracy and law and order in society on a basic level. I don't like to be right about that, but I think we see it happening before our eyes.

The race to develop what we're currently calling "AI" is a similar story. There are too few of those who make technology who care to think about its impact on the world. For instance, money is basically a network technology from ancient times. The Internet is basically an extension of the money network. If we want to understand the Internet, we need to look at things like the history of money too.

How does your project contribute to correcting some of those issues?

The project Trust semantic learning and monitoring is part of a wide-ranging effort to understand trust in networked socio-technical systems. For many years now I've been trying - at least - to develop an understanding of how individual "agents" behave when they get together in numbers, building from the bottom up, and the implications of how it all works. It's more or less what's called Promise Theory today.

It started out with me wanting to understand computer networks, but I quickly realized that it's also the way to put social science on a more theoretical basis too. One of the issues that pops up in both cases is the role of trust and how trust and promises relate to one another.

Some years ago I wrote a kind of position paper suggesting that trust might work as a kind of common currency for social systems, just as energy is a currency for physical phenomena. I've seen how the concept of trust is used and abused in Computer Security, for instance. Technologists realize that too much trust could lead to risk and so they invoke that old binary ploy of saying - okay, it's either yes or no, one or zero, let's get rid of trust and have zero.

So Zero Trust became a marketing slogan. But that's nonsense obviously.

First of all, it's just saying don't trust them, trust me instead. Without trust, nothing could work. So what I wanted to do was to see if we could apply Promise Theory to the issue of trust. What could we learn from it, and how could we test the ideas it brings up?

I realized we could use Wikipedia as a data source for answering (at least a few) questions about trust, because it's an open platform that traces the human interactions around editing pages. It's a great opportunity to learn something important from an idealistic project that's already been of huge benefit to humanity.

What do you like most about (working on) your project?

I like understanding how things really work. You know, when I started I imagined I might find something like the usual sort of feel-good story we like to tell about human cooperation. You know, we come together to help each other if we trust one another, Kumbaya. It's rosy and idealistic and very politically correct.

But interestingly, that wasn't the picture that came out of the study. It showed that people basically come back to something because they mistrust it, which sounds upside down, but it makes a lot of sense if you think about what grabs our attention. If you trust something too much, you're not paying attention. If you're not sure, you invest effort to watch over everything more carefully and that's costly.

But then there are also people we avoid completely because we don't trust them at all. So how can that work? It turns out that trust isn't one thing; it has two components. One is trustworthiness, which is our ongoing assessment of how reliable things are. If we overcome a basic threshold of this probable reliability, which is informed by how well people and things keep the promises we're interested in, then the attention part of trust comes into play, and it's driven by residual mistrust.

So there has to be some kind of 'seed' that attracts our attention first, an alignment of interests. Then we figure out how carefully we want to keep watch over that ongoing relationship. There's a scale of attentiveness, from basic curiosity to invasive body searches. Mistrust is the prerequisite for learning. So, when people talk about zero trust, they really mean the second part of it: paying greater attention to detail. There's clearly a role for trusting less, for investing greater attention in the sense of quality inspection and so on.
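To make the two-component picture above concrete, here is a toy sketch (purely illustrative - the function names and the 0.6 threshold are assumptions for this example, not part of the project's actual model or code): trustworthiness is a running estimate of promise-keeping, and attention only engages once that estimate crosses a threshold, after which it scales with the residual mistrust.

```python
# Toy illustration of the two-component trust idea described above.
# All names and the threshold value are illustrative assumptions.

def trustworthiness(kept: int, total: int) -> float:
    """Running estimate of reliability from promises kept so far."""
    return kept / total if total else 0.0

def attention(kept: int, total: int, threshold: float = 0.6) -> float:
    """Attention engages only above the reliability threshold,
    and is then driven by the residual mistrust (1 - trustworthiness)."""
    t = trustworthiness(kept, total)
    if t < threshold:
        return 0.0          # below threshold: we simply avoid the agent
    return 1.0 - t          # residual mistrust sets how closely we watch

# An agent who keeps 2 of 10 promises never earns our attention;
# one who keeps 9 of 10 gets watched in proportion to the 10% doubt.
```

On this reading, "zero trust" policies correspond to pinning the attention component at its maximum, not to removing trust altogether.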

The implications of this are important for the bigger picture, not only the Internet. It's a bit like H.G. Wells' Time Machine. In the future, their society has become these two groups of beings. The Morlocks who do all the work underground, and the Eloi who trust everything to be provided for them and are pretty indolent. Given our reliance on smartphones to give us more and more at the push of a button, we could easily fall into that trap.

The Internet of Finance already tends to push us even deeper into the divide between 'have' and 'have not'. The changing demographics and the challenges around the future of human employment are all a big destabilising force on society around the globe - we don't feel we can trust enough. It makes people shut out the less familiar, and become more tribal in their thinking. I think we could easily underestimate the dangers of that. I hope we'll look back on it all with some circumspection and, apart from a few mistakes, we'll find a way to come back to something more open and stable.

What trust and Promise Theory ultimately suggest is that our limited human faculties are the bottleneck. Trying to supplement ourselves with AI or machinery is an obvious answer to that, but it will only work for a few specialized purposes. The core of what keeps us together has to be constrained by our human capacity for relating to the world. Trust isn't a transitive thing. You have to trust technology if it's going to take over the job of mistrusting or monitoring something else. So you don't escape trust. It's trust all the way down.

Where will you take your project next?

Something interesting popped out of the study unexpectedly: editing was bursty. It wasn't a continuous marathon, but more like a number of episodes. These episodes involved about the same number of people regardless of what they were working on. People would come, tussle a bit over some details and then get tired of it and leave. That suggests there is something intrinsic to all humans limiting their tolerance of mistrust. It's draining - after all, it's expensive to argue with others.

This reminded me of Robin Dunbar's work on social group sizes and our cognitive capacity, and it gave the same numbers that he and his colleagues had found for conversational groups elsewhere. I realized that the key to understanding human social group numbers must lie in the dynamics of how people pay attention - meaning trust. I actually ended up contacting Robin and we've since written a couple of papers together showing how this argument predicts the group sizes in Wikipedia extremely well.

This work I've been doing on Promise Theory has been slow going, partly because it's hard to find time to do research unless someone is sponsoring it. Over the years, it's taken me in all kinds of unexpected directions. One of the things I enjoyed the most was to be invited into the Agile Leadership community to apply promises to the issues of leadership: trust, authority, services, and so on. It turns out that we can put these loose ideas into a more formal framework and understand them quantitatively. For example, why do certain figures end up becoming leaders? Where does authority come from?

My colleague Jan Bergstra, who helped to develop Promise Theory, has also applied it to study accusations - something that's a growing issue in social media and politics. Accusation is something that immediately reduces trust, so it's a weaponised form of communication that we're seeing amplified by social media. As long as people did it in small circles, it was manageable. Now we're broadcasting accusations across the planet and the consequences are enormous and on a global political scale. We probably thought social media would be harmless gossip. I think David Bowie actually put his finger on it years ago when he told a disbelieving interviewer that it would change everything.
(Editor's note: The Bowie interview can be found either in the geoblocked BBC archive or on YouTube.)

How did NGI Assure help you reach your goals for your project?

I find it very hard to ask for money from people, but a friend of mine who had already applied and gotten funding recommended NLnet to me. What impressed me straight away was actually two things. First how smart and genuinely interested Michiel [Leenaars] was, and secondly (perhaps ironically) how much trust I was afforded to get on with the work without a lot of nonsense report writing and micromanagement which you get in EU funding and so on. Along the way, I tried to document everything and I always got good feedback and encouragement. That's quite unusual. So there's a sense of the organization wanting to help more than trying to bind you in some kind of project management straitjacket.

Do you have advice for people who are considering applying for NGI funding?

Have a go. I don't know what else to say. I'm still a novice here, but it seems like a great opportunity in safe hands.

Do you have any recommendations to improve future NGI programmes or the wider NGI initiative?

You mean apart from funding more of my work? (Laughs) It seems to me that they've got this covered. I don't know what I could possibly add to what they do. They're professionals, specialists. We need to respect that.

Acknowledgements

Image: courtesy of Mark Burgess.

Published on September 12, 2024

The project Trust Semantic Learning and Monitoring received funding through the NGI Assure Fund, a fund established by NLnet with financial support from the European Commission's Next Generation Internet programme, under the aegis of DG Communications Networks, Content and Technology under grant agreement No 957073.

