Dr. Martijn Warnier graduated with a Master of Science in Cognitive Artificial Intelligence from Utrecht University, the Netherlands, at the beginning of 2002. He did his PhD in the Security of Systems group at Radboud University Nijmegen. His research focused on language-based security and the mathematical formalization of properties such as non-interference, confidentiality and integrity. For the last three years he has worked as a postdoctoral researcher in the Intelligent Interactive Distributed Systems group at VU University Amsterdam. Since September 2009 he has held the position of assistant professor at TU Delft. His current research interests, besides security, include the interdisciplinary field of Computer Science and (Computer) Law, and self-organizing and autonomic systems. In his free time Martijn Warnier loves to act in the theatre.
27th April 2009, Amsterdam
At the Intelligent Interactive Distributed Systems group at VU University Amsterdam, Martijn is one of the spiders in the web. When I entered the group as a visiting fellow he made me feel at home. While arranging practicalities he would, in his own special shorthand way, make remarks about the research I was about to do. It seems that Martijn does not appreciate socially accepted thinking or questions, as becomes apparent in the interview below as well. For a start he will defend his hard position in current debates, not at all willing to allow any 'soft' reasoning of the social scientist I am. Surprisingly, after he has cornered my thinking, he starts to be really interested in what is actually at stake. In the exchanges I have with him, the concept of 'interdisciplinary collaboration' does not describe what happened. It has been more a query into how two planets communicate. In such conversations the scientific, social and political worlds all connect, and little by little space for thinking evolves.
Computer words are very different from the words human beings use, because computer words have very clear, specified semantics. There is no ambiguity, according to Warnier. If you type a command, the computer will do exactly the same thing every time, while human words, depending on the context, the intonation, or the body language, can mean a lot of different things. Logic words function in a structure and have a very precise meaning. In mathematics there is no ambiguity. If you give a computer a command, a word, you can trace back all the steps to see which gates actually flip in the processor. Computer systems can talk to each other and this can lead to negotiation, but in principle it is still a complete reduction back to the CPU, the central processing unit, or the underlying network infrastructure. You can basically explain everything that is happening before it happens. In principle it is a closed system; there is no influence from outside. A user can influence the system, and doing so brings in randomness, but what the system will do will be completely predictable. This would be the ideal situation.
Because of the current business models in software, we end up with very crappy software that we don't really understand how it works, argues Warnier. The current business model is "release first and patch later": first make sure that you're the first to market, then send out update after update, and eventually you get something that works, or maybe not. That is what Microsoft is doing. And there is no real demand by users for reliable software. Maybe in niche markets: if you want to assemble a rocket, you do not want it to explode because a computer divides by zero. But there's no real market for it; that is why it is not happening. Many side effects follow from this. For example, users get frustrated when the hard disc fails or the operating system crashes, and they have to reboot and lose half a day's work.
One of the reasons for the muddy software world we live in is, according to Warnier, that we don't even know what the real specifications are. By formulating specifications, which are about function and independent of users, quality control is supposed to be guaranteed. One tries to write down all the possible inputs and all the possible outputs, and these are written down in real words: we want the system to be secure or we want the system to be reliable. But those words no longer have an exact, precise meaning; they can mean lots of different things. We don't really know what we want our system to do, says Warnier. Nobody takes the effort to really find out. It's very complicated to get these things right, and as long as there are no people demanding these things and willing to pay for them, it is not happening. There's no need for it, because people will buy the crappy software anyway, as there is no real alternative.
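The gap Warnier points to, between a vague wish such as 'the system must be reliable' and a precise specification, can be sketched in a few lines of code. The example below is purely illustrative (the function and its contract are the editor's invention, not taken from Warnier's work): the vague word 'reliable' is replaced by a checkable precondition and postcondition.

```python
# Vague wish: "the division routine should be reliable."
# Precise specification: every allowed input and every guaranteed
# output is written down, so nothing is left ambiguous.

def safe_divide(numerator: float, denominator: float) -> float:
    """Illustrative contract.

    Precondition:  denominator != 0
    Postcondition: the result equals numerator / denominator
    On a violated precondition: raise ValueError, never crash
    or silently return garbage.
    """
    if denominator == 0:
        raise ValueError("precondition violated: denominator is zero")
    return numerator / denominator
```

Written this way, each clause can be checked, by a test suite or, in principle, by a formal verification tool.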
Nevejan suggests that it is an interesting concept for witnessing, to focus on conditions and specifications as two things you have to create for a system to work. There are conditions that you witness and to which you can be witness if you have some specifications. Otherwise you can't see the conditions. Warnier may agree or not, because these are definitely not his ideas but Nevejan's, he states.
For Warnier systems are tools in the first place. He would never call them participants. Maybe abstractions, but systems themselves don't have any power. Systems give people a context or a structure, but the computer system itself can't force them, or anybody else, to do anything. If worst comes to worst, systems can be shut down and people would use a different tool to communicate. For these reasons Warnier argues that the system in itself is not an actor. Also, he argues, systems cannot witness anything, because witnessing implies a form of consciousness that systems do not have. They observe, they monitor, but they do not witness. You interact with a system, but the interacting doesn't imply witnessing.
Of course, it is possible for a user to project witnessing onto a system, because a user interacts with a tool that is very complex and most users basically have no idea how it works. Warnier thinks it is a survival tactic of humans to humanize the system, to give it more human qualities. This is something that designers exploit, he argues, making the system more humanlike because it makes it easier for humans to interact with systems.
The systems themselves are not active; they are just tools, exactly the same as a TV or a hammer: you can use them to do a lot of different things, but they don't do anything themselves. Warnier himself uses systems to communicate with other people, to do simulations and to do experiments. But he could also use pen and paper for a lot of these things, he argues. He feels that the technology does not change him in the way a potter, for example, is changed by clay, as Jogi Panghaal described. He makes the technology do what he wants; because he understands the technology, he is able to do so. Most people don't want to understand how everything exactly works, and why should they? They are using a tool, Warnier argues.
Nevejan replies that we need to better understand systems because we live in a world where they have become very important; they interfere in more personal spheres than any tool ever before, even in the body.
Witnessing each other between human beings sets the terms for interaction. Human beings witness and are witnessed by systems, and these systems influence possible survival more and more. Systems operate with a speed and scale human beings cannot compete with. It's more micro than ever and more macro than ever. Nevejan argues that the basic way we witness, the way we perceive what we see, including our understanding of it, is changing profoundly because of systems. So it is of vital importance to understand the relation between human beings and systems better than we do, she argues.
Warnier answers that systems are also very stupid. Systems themselves are not the real danger; the danger is in the people who are using the systems. Also, he argues, a lot of changes we now attribute to computers and distributed systems, for example, are changes and feelings of discomfort that were also triggered by earlier media and technologies. Our grandmothers have seen huge changes happening in technology. He agrees, though, that the invasion of the personal sphere is much larger than ever. But for Warnier this is personally an issue of political concern and less an issue of scientific research.
Warnier's work is basically thinking and publishing these thoughts, which is why he only needs pen and paper to do his work. Technology does not affect his way of thinking because he usually thinks at higher abstraction levels, in algorithms, information theory and mathematics. For Warnier the most interesting part is formalizing ideas and abstractions into mathematics. He tries to make ideas crystal-clear by formalizing them completely and enjoys the deep insight into how, for example, information processing works. There are beautiful mathematics and very ugly mathematics. Warnier appreciates beautiful mathematics in the same way as he can appreciate art, for example. Beauty and elegance are important in trying to understand how things work.
Of course Warnier has to think about how his abstract ideas can be applied in some specific context. Currently, for example, Warnier and colleagues are trying to get spikes out of energy production in larger energy systems. However, a lot of systems are mainly crappy realisations of beautiful mathematical concepts. Technology is independent of these; it is a realization of some of these ideas, but mathematics is basically a separate world, a platonic heaven. In Warnier's work users are not important; they are way too concrete. Warnier and colleagues think about the algorithm and do some simulations, and then somebody else will use it in the real world. It's a whole other process.
In mathematics there are no values as there are in communities of people. Values like trying to be polite to each other, smiling if somebody makes a joke, listening to other people, respecting them, etcetera, are completely independent of the system. Of course science can be used for good and for evil, and as a scientist one has a certain role in this. According to Warnier one should at least try to ensure that the things one comes up with can't be used for evil. However, to have moral values in a system does not make sense to Warnier. Moral values are for people. The only value he cherishes in his work is to create good quality, and one can argue whether this is a moral value or not. In the end people will do with the tools whatever they want to do with the tools. So if they can abuse them for something, they will do it. And if the tools can help other people, they will do that too. It is independent of the system, he thinks. There are always people using systems for abusing others or helping others, and for anything in between.
Computer infrastructure is not much more than a couple of bytes in one way or another. So there's nothing there that lives. Time, rhythm, place and culture are all human concepts that people can perceive, but actually they are human projections onto the system; they are not there, Warnier argues. If you take humans out of the equation, there is just the system, and the system is a tool. Whether one uses a very high-tech, fast computer network or smoke signals, with both one can do confidentiality or transparency, for example, because both are basically just carriers of human information.
There are a lot of practical reasons why you need to think about the time and space of your system: for example, a universal time between computers so they can synchronize, and physical structures that deteriorate after some time. Also, mathematics as action can be understood as transformation. Or, in other mathematics, relations are defined and, for example, operating systems will be able to recognize each other or not. It is not possible to compare human beings with systems; human beings are too complex. The best you can do is to make an abstraction of a human being and compare that to a system or an equation or whatever, since we don't understand how human beings really work, argues Warnier.
Even though it is not his ambition, Warnier agrees that if you want to design a good system that people are going to use, you really have to think about how people want to use their systems. A lot of people don't understand how a system works in the first place; the tool is very complex, and people do not want to know how it works, just as they do not want to know how their car works as long as it drives. Designers, and all others who make the system, try to make their systems more humanlike because it makes it easier for humans to interact with systems. In doing so, though, they actually blur the fact that the system being used is a tool.
This blurring between tool and intelligent system is more and more a problem, argues Warnier. Systems cannot witness, but systems become more powerful, and people connect different systems to each other and find new ways to exploit data that are already in the system. The data become more accurate and more things are interconnected. This can lead to some good things but can also lead to abuse. There are European laws, like the European Data Directive, that define a lot about what companies and governments are actually allowed to store, which kinds of information, for how long, and that they have to inform people. Nevertheless, very few people understand what is at stake. Warnier thinks, though, that changing the research agenda will not make a difference. He does not believe that current society can inform research as to what it should and should not work on. One cannot prevent people from studying and inventing new stuff. Also, researchers themselves do not know what the results or the implications of their research will be in, say, fifty years' time. When discussing the example of the child dossier that was recently introduced in the Netherlands, Warnier argues that the systems engineer, the computer scientist, or the mathematician cannot be responsible for how a couple of civil servants decide to use a system.
The following is an edited transcription of the conversation. Film fragments of the conversation are included to illustrate parts of the transcribed text.
CN: Today we're here with Martijn Warnier. It's the 27th of April 2009 at VU University in Amsterdam. Welcome, Martijn, thank you. Can you shortly tell us what you do?
MW: I'm a post-doctoral researcher at the university. My work consists of research, the supervision of PhD students and Master's students, and teaching classes. My research is quite broad; my background is in security, but I'm now working more on self-organizing systems and autonomic computing.
CN: I read only the first two chapters of your dissertation, most of it is completely inaccessible for me of course. But what I find very interesting is that you focus on language-based security. So, tell me, how do words act?
MW: I have no idea what you mean by this question.
CN: I mean in programming. When I go out in the street I can talk to anyone. But in your field, a word can be a command; a word itself can execute something, it can make something happen somewhere else. So there are all these things happening because of the words and the logic in the words. And then you can even have words to prevent the words from doing what they want to do.
MW: I don't think we call them words. Computer words are very different from the words we use, because computer words have very clear, specified semantics. There is no ambiguity. If you type a command, it will do exactly the same thing every time, while human words, depending on the context, or the intonation, or the body language, can mean a lot of different things. So I really think they're two different concepts.
CN: Okay, but your words, if I just call them like this from now, your logic words, they also have meaning because they are in a structure.
MW: Yes, absolutely, but they have a very precise meaning. They have a mathematical meaning and there is no ambiguity. If you give a computer a command, a word, you can trace back all the steps to see which gates are actually flipping in the processor.
CN: So what does it mean when you said in a conversation before that all systems can negotiate, all the time.
MW: I am not sure what I meant when I said that. Computer systems can talk to each other, that's for sure. Talking can lead to negotiation.
CN: I try to find out where there's room for witnessing. If you say a logic word is very precise, so there's no ambiguity whatsoever, then there's no way to interpret it; it's just something you have to know. But when you say systems can talk to each other, is there also no ambiguity there?
MW: No, in principle you could still do the same. There's a complete reduction back to either the CPU, the central processing unit, or the underlying network infrastructure. You can explain everything that is happening before it happens basically, you can predict precisely what is going to happen.
CN: Because it has no connection to the outside world.
MW: Well, yes, in principle it is a closed system. So there is no influence from outside. You as a user can influence the system, so you bring in the randomness but what the system will do will be completely predictable.
CN: You write that there are conditions and specifications, and you actually say both the conditions and the specifications are completely predefined by the developer or the designer.
MW: That would be the ideal situation.
CN: So, what goes wrong?
MW: A lot of it has to do with the business model in software. The current business model is "release first and patch later". So first make sure that you're the first to market, and then you can send out updates on and on, and eventually you get something that works, or maybe not. That is what Microsoft is doing. And there is no real demand by users for reliable software. Maybe in niche markets: if you want to assemble a rocket, you do not want it to explode because a computer divides by zero or something. But there's no real market for it; that is why it is not happening.
CN: So what did we end up with?
MW: We end up with very crappy software that we don't really understand how it works.
CN: And what is the effect of that?
MW: I think there are too many effects to really give a precise answer to this question. But I see a lot of people get frustrated, for example, when they work with a computer. Because the things they have been working on for quite some time suddenly become inaccessible, for example because the hard disc fails or because the operating system crashed. Then they have to reboot and lose half a day's work.
CN: So actually you're describing, and I try to understand, that there is a system where every word has a precise meaning, no ambiguity, in a context of semantics where there's also no ambiguity. And yet conditions and specifications have not been met. So these words start blurring.
MW: We don't even know what the real specifications are. The specifications that we have are indeed in terms of real words: we want the system to be secure, or we want the system to be reliable. But those words no longer have an exact, precise meaning; they can mean lots of different things. We don't really know what we want our system to do, because we don't write down these nice precise specifications. So we don't know what they do.
CN: How come we don't know?
MW: Because nobody took the effort to really find out. It's very complicated to get these things right, and as long as there are no people demanding these things and willing to pay for them, it's not happening.
CN: So there's no need for us to make specifications?
MW: That depends on the situation, but in general, no, there's no need for it. People will buy the software anyway because there is no real alternative.
CN: And on the side of the conditions?
MW: I don't really know what you mean with the conditions.
CN: I just take your words.
MW: You show me my thesis because I don't think I've ever used the word 'conditions'.
CN: I do not want to go into your book now, but I will show you later.
MW: I don't know what you mean with conditions. It could mean requirements, but I don't think that's what you mean.
CN: I just take a condition in the way I understand it. What I thought was very nice about conditions and specifications, as two things you have to create for a system to work, is that there are conditions that you witness, to which you can be witness if you have some specifications; otherwise you can't see the conditions. So I thought that's a very interesting concept when you think of witnessing.
MW: Yes, but I think you should get the credit for that, because these are definitely not my ideas.
CN: Okay, let's go back to the situation of the systems. So we have here an ideal, which came out of needing money. And now there are all these words that act and we don't know what they do anymore. So then I read that you are working on security as non-interference, which means that the public information should not mess with confidential information.
MW: Well, yes, in general you can observe public information. You can send public information into a system, but it shouldn't interfere in any way whatsoever with confidential information. So you shouldn't be able, by only observing public information, to somehow get hooked into the secret information or confidential information.
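Non-interference as Warnier describes it here can be illustrated with a toy sketch (the functions and names below are hypothetical, not taken from his framework): a program satisfies the property when varying only the confidential input never changes what a public observer sees.

```python
# Toy illustration of non-interference: public outputs must not
# depend on confidential inputs.

def leaky(public: int, secret: int) -> int:
    # Violates non-interference: the public result depends on the secret.
    return public + (1 if secret > 0 else 0)

def non_interfering(public: int, secret: int) -> int:
    # Satisfies non-interference: the secret never affects the output.
    return public * 2

def output_is_secret_independent(f, public: int, secrets) -> bool:
    # Vary only the secret; the set of observed outputs must not change.
    return len({f(public, s) for s in secrets}) == 1
```

Here `output_is_secret_independent(leaky, 5, [0, 1])` is false: an observer who only sees the public output still learns something about the secret.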
CN: So I was really curious about this, because I am talking about communities of systems and people, and because systems have a lot of executing power, they have a great influence on what happens.
CN: So they are participants in their own right, yet ...
MW: I would not call them participants, I'd rather call them tools, or maybe abstractions of, but these things themselves don't have any power.
CN: How do you mean?
MW: They give people a sort of context, a sort of structure that they have to conform to, but the computer system itself can't force them, or anybody else, to do anything. If worst comes to worst we all close them down and we'd use a different tool to communicate. So I don't think the system in itself is an actor.
CN: What are the reasons you think it's not so? I mean, even going through a frontier, a border at the airport, you go through the systems.
MW: But the systems themselves are not active. They are just tools. They are exactly the same as a TV or a hammer, you can use them to do a lot of different things, but they don't do anything themselves.
CN: What does the system do to you? Because it's like, if I work with wood, the wood changes me. I change the wood; the wood changes me. If I am a nurse, I work with sick people, I mean, it changes who I am.
MW: It does influence, yes.
CN: How would you describe the influence?
MW: The way I use them, because I think that is the best way I can describe it, is that I use them as a tool. I use them to typeset texts, I use them to communicate with other people. I use them to do simulations, to do experiments. But I could also use pen and paper to do a lot of these things. I don't...
CN: It doesn't change you?
CN: Then how can you change it?
MW: It's a tool, so I can change it any way I want to. I mean, I understand how the thing works, so I can do with it whatever I want.
CN: There is this story of the potter, of Jogi, where the potter throws clay at the wheel, and he can only throw clay in the centre if he is centred. So the wheel centres the potter and the potter centres the wheel. Then he makes a pot, which is a centred pot. The centred pot goes to a woman, up onto her head, and then the pot, the centred object, centres a person. So it creates centred communities. What do you think? If it's not a pot, but it's technology or a system.
MW: For me it's still a tool.
CN: But do you need to use the tool?
CN: You need nothing to use the tool?
MW: The only thing I need for my work is, in principle the only thing I need is pen and paper. A lot of things are more convenient with a computer, of course.
CN: So actually you're using your brain, you're thinking.
MW: Yes, that is what I do.
CN: So, how does technology change your thinking?
MW: I don't really think it does, because I usually think at higher abstraction levels. I'm thinking in algorithms, information theory and mathematics. Technology is independent; technology is a realization of some of these things, but it is ... Mathematics is basically a separate world, a platonic heaven.
CN: Platonic heaven, finally no mud... so let's go to ... no I switch ...So you are in your platonic heaven and you're designing systems, that we will use.
MW: I don't think that's a correct characterization. In my work I never think of users. Users are in a way not important for me; they're way too concrete. I'm thinking at higher levels of abstraction, and users are just an annoying set I don't really work on. And also, I don't build real systems for people. I come up with ideas that other people might use to build real systems, but I'm never really involved in the real systems the people are using.
CN: So can you give me an example of such an abstraction?
MW: Well, for example, the non-interference. I can make a very nice characterization and classification of different forms of non-interference in the mathematical theory, and then I can give a framework where you can prove certain types of non-interference within a mathematical framework, and I can show you how this can result in a tool that can be used for proving non-interference properties of programs. That's the most I do. I don't look at how people are going to use this tool, what they are going to do in practice. For me the most interesting part is in formalizing the ideas, formalizing the abstractions in mathematics, and in a way trying to get my own ideas crystal-clear by formalizing them completely.
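The kind of mathematical characterization described here is commonly written in the following textbook form (this is the classical formulation in the Goguen and Meseguer tradition, not necessarily the exact one used in Warnier's thesis):

```latex
% A program P is non-interfering when any two starting states that
% agree on their public (low) part, written =_L, yield results that
% again agree on their public part; confidential (high) data can
% never influence what a public observer sees.
\forall s_1, s_2 :\quad
  s_1 =_L s_2 \;\Longrightarrow\;
  \llbracket P \rrbracket(s_1) =_L \llbracket P \rrbracket(s_2)
```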
CN: So how do you choose the ideas that you formalize?
MW: By reading other people's work, of course. You always work inside a community, a scientific community. Talking to other people about what they are working on. Looking at what other people have been doing; seeing where you can improve stuff; seeing where ...
CN: Sorry, are you concerned that ideas that you make will manifest in the real world?
MW: No, that's not something I'm concerned with at all.
CN: So when it goes wrong, or good, you don't care?
MW: What do you mean with wrong or good? Is something only good when it's realized in the real world?
CN: No, I mean no, I mean the effect. So for example, non-interference, can, in certain situations, be a fantastic thing, in other situations, it's a nightmare.
MW: Yes. What you're talking about now is something completely different, for me at least. You're talking about how science can be used for good and for evil. Always. And I think as a scientist you always have a certain role in this, and that you should at least try to ensure that the things you come up with can't be used for evil.
CN: But I have to understand. I'm exploring how systems participate in communities of people. Whatever they are. Tools? They are tools with extreme, with a lot of power. So they change our concepts of time, of distance, of place. I mean, they are really changing a lot at the moment. So for me it's not satisfactory when you say "listen, there's a fantastic idea, but users are not important, I work on the concepts and what happens with the concepts I don't care". I mean, that's also contradictory to how my intuition perceives you. So please, explain this better to me.
MW: OK. For my work, for me, if I do my work well, I have a lot of publications. That's basically my work; without the supervision and all the other stuff, I'm basically producing publications. To get publications you need to know how to write scientific articles, and you need to have new ideas and prove these ideas in some way. But the most you do is something like a proof of concept. I'm not involved in getting technology into such a shape that people can actually use it, which is a completely different process as well, and I'm not interested in that process and I don't really think about it either. In a way I do have to think about how these things can be applied, where the abstract ideas that I have can be applied in some specific context, like what we are now doing in, for example, the energy domain, where we are trying to get spikes out of energy production. So that is a nice application of how some algorithm we've come up with can be applied. We think about the algorithm and do some simulations, and then somebody else, probably a team of hundreds or thousands, actually has to do this. It's a whole other process. I always like it when things I've done are picked up by other people and they improve on them or do something with them. That's always nice, but it's not why I do it. I do it because I like these ideas and I want to improve on these ideas.
CN: So for you it's beauty.
CN: Pleasure and beauty.
MW: Beauty, elegance. I'm trying to understand how things work.
CN: What things?
MW: Trying to get better results basically. If you have an algorithm and you can improve upon this algorithm, for me those are fundamental. There's usually something like a fundamental insight behind it. For me that is a very deep insight, how information processing works for example.
CN: Now we switch to the next topic. As you know I develop this YUTPA model, which has four dimensions to define witnessing: time, place, action and reaction. If possible, I would like to discuss with you what happens on the three main layers, infrastructure, middleware and applications, and mostly infrastructure and middleware, with these dimensions, concerning witnessing. So, for example, if we talk about time, how does time function on the level of infrastructure and on the level of middleware?
MW: This is a very vague question.
CN: Well, what can be done with time? I mean, infrastructure and time: at a certain moment it is constructed. Can it have any rhythm? Can it have any sound? How much time can it live? Does it have to be able to die?
MW: You're talking about a computer system now. Computer infrastructure is not much more than a couple of bytes, in a sense, one way or another. So there's nothing here that lives, or has rhythm. These are all things that people come up with and project onto a system like that. But it's not there.
CN: It is not there?
MW: If you get the people out of the equation, independently it's not there. People can perceive these things but that's more projection.
CN: Okay, it's not there. There's no beat to the system. So it's infinite. It's nothing but infinite, concerning time. It has no time?
MW: Of course, there's a concept of time in these systems. If you talk about computers again, it's usually handy to have a sort of universal time between these computers, so that they all have the same clock and there is synchronization between the different computers. All these computer networks are realised in physical structures. These structures are not indestructible, so some of them will break down or whatever. So there will always be deterioration, but usually what happens is that you replace a part and it keeps on going. But again, if you take the users out of the equation, at a certain moment it will stop functioning.
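The 'universal time between these computers' that Warnier mentions is usually approximated by exchanging timestamps over the network. A minimal sketch in the style of Cristian's algorithm (the function name is illustrative; real systems use protocols such as NTP):

```python
# One round trip to a time server: the client notes when it sent the
# request and when the reply arrived, assumes the network delay is
# symmetric, and takes the server's clock reading to correspond to
# the midpoint of the round trip.

def estimate_clock_offset(t_send: float, t_server: float, t_recv: float) -> float:
    round_trip = t_recv - t_send
    midpoint = t_send + round_trip / 2
    # Positive offset: the local clock is behind the server's clock.
    return t_server - midpoint
```

The client then adjusts its clock by the estimated offset; the symmetric-delay assumption is what limits the achievable accuracy.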
CN: Let's talk place. Is there a culture in the systems?
MW: A culture? A culture is again a very human concept.
CN: Well, there are certain values. For example, traceability could be a value, or transparency. Confidentiality, non-interference is a value.
MW: But these are all human concepts, I mean if you put a human out of the equation again, why should one system be confidential towards another? Or not transparent. These are all human concepts.
CN: Yes, and also humans made the systems, right?
MW: Yes, yes.
CN: So these concepts may also be in there.
MW: I am not sure; for me it is still a tool. So whether you use a very high-tech fast computer network or you use smoke signals, with both you can do confidentiality, transparency, whatever; both are basically just carriers of human information. So for me it doesn't really matter if you use this high-tech thing.
CN: But are you saying that values only exist in human things, or are there values in the heaven of math? Are there values there?
MW: No there are no values.
CN: And you think there should be values in the communities of people?
MW: Yes, of course.
CN: OK, so if systems function in these communities of people, what values are important?
MW: That will depend on the system and the people who are using it. So if people form a community, there are all kinds of different values that are important: try to be polite to each other, smile if somebody makes a joke — it can be a lot of different things of course — listen to other people, respect them, etcetera. But again, for me these things are completely independent of the system.
CN: So how does the system relate to you?
MW: As a tool.
CN: But when you think of your work?
MW: For me, a lot of the systems are mainly crappy realisations of beautiful mathematical concepts.
CN: And how do beautiful mathematical concepts relate to you, as a human being?
MW: I suppose the same way that I would value art for example. There are beautiful mathematics and very ugly mathematics.
CN: So time and place are no issue, we talk now action. The dimension of action, because we talk witnessing, and there is no witnessing happening in the system for you on a level of space or time. It's just there.
MW: Well yeah, in principle yes.
CN: But the system does things.
MW: If we tell it to do things.
CN: Systems wouldn't be there without humans. So humans are in the equation, right? But you say that time, place and space are not qualifying dimensions for a system. Only as far as people project it onto them.
MW: Well for me a system doesn't have a rhythm, and time and space are important, I mean there are a lot of practical reasons why you need to think about the time and space of your system.
CN: And if you talk about action, I mean, also in mathematical equations also things happen. It is about transformation, right? You have a certain input, it goes through the equation and there is an output. It is a way to talk about transformation.
MW: Yes that is one kind of mathematics, yes.
CN: And what other actions are there?
MW: In mathematics? There are relations, like how one system relates to another. We can see that two operating systems may be the same, or that they look different.
CN: And what happens if you compare the systems, and one of the systems is a human being?
MW: You can't describe a human being in mathematics. Human beings are too complex to ... well, the best you can do is to make an abstraction of a human being, and compare that to a system or equation or whatever. We don't understand how human beings really work.
CN: I spoke to Aditya Dev Sood, a very well-known designer for Nokia and others, and he said the only way to design good systems is to have ultimate empathy for the user, because in the details you will find the abstractions of the tools.
MW: Absolutely, I agree with this. But again, I don't see it as my ambition to..
CN: No, no, I'm just trying to understand...
MW: Yes, and I totally agree. I mean, if you want to design a good system that people are going to use, you really have to think about how people want to use their systems. I don't know whether empathy is the word I would use, but yes, certainly something to that effect.
CN: Because then you get the real good specifications.
MW: No, specifications are about function and independent of users. With specifications you try to write down what all the possible inputs and all the possible outputs are of a system.
CN: So that will be the design for quality control.
MW: Yes, for quality control.
CN: And when we witness actions of systems and systems witness actions of us, I mean, can I even say this?
MW: Yeah, I am not sure if you can really say that. I don't think a system can witness anything. For me witness implies a form of consciousness that systems don't have. Therefore they cannot witness. But if you would say monitor then I would agree.
CN: That's true, very true. The difference between observing and witnessing is consciousness. So they do observe but they don't witness?
CN: But awareness can maybe be.., well, the question is what awareness is?
MW: Well, awareness, for me, presupposes consciousness, and awareness is something like observing and understanding, maybe, what is going on.
CN: So even when we have different systems observing and they are matched and out comes a complex feedback, you don't think this is a kind of witnessing. But as a user or as a citizen I don't know this.
MW: Of course, it is very well possible that as a user you are going to project witnessing onto a system. They do this already, of course, with computers — most of them bad systems, of course.
CN: So you think that most of what we actually attribute to systems is a pure process of attribution and not, well, there is no real interaction.
MW: A lot of people don't understand how a system works in the first place. So it is a lot easier to make it more ... well, because you interact with a tool, the tool is very complex, you have no idea how it works basically. I think it is a survival tactic of humans to try to humanize the system, to try to give it more human qualifications. This is something that designers exploit, I think, trying to make their systems more humanlike because it makes it easier for humans to interact with systems.
CN: But actually they are blurring the issue.
MW: Yes, absolutely.
CN: So you could make a very strong argument against design. People don't know what they are dealing with anymore.
MW: Yes, but designers aren't the only guilty ones in this...
CN: Oh, who else?
MW: The user, because the user is not interested in how things work; they don't want to know. How many people know how a car really works? Or even a bike? And something as complex as a computer?
CN: Who else?
MW: Well, designers have some kind of bigger notion, because you have the usability design and the form, nice organic forms or otherwise, but of course you also have the systems design, like the blueprints of the system, which the users don't really see.
CN: So the engineers, the builders, are also guilty?
MW: Yes, of course, and also the original people who came up with it.
CN: So we are all guilty, you are also guilty. Only not the mathematical heaven..
MW: Haha.. The question is, what are we guilty of?
CN: Well, if you think of ..
MW: Yes, blurring the issues, but I think people want to be a bit confused about these things, in a way. They don't want to understand how everything exactly works, that's the main issue. I can explain to everybody a lot more about computer systems than most people want to know. But it is a tool; why should you need to know how exactly it works?
CN: Well, because we live in a world where they have become very important. And where they interfere in more personal spheres than any tool ever before, even in the body. So it is a technology that has a huge impact on identities, futures, etcetera. And we hardly understand it, so I want a better understanding. That is also why I use 'witness' rather than 'observe': between humans, when I witness you and you witness me, it sets the terms before we even start interacting. And I am also a witness of systems about which I may be mostly blurred by designers, developers, engineers and mathematicians. But nevertheless these systems influence my possible survival. So it is a relationship of vital importance to me to understand, the relationship with the machines.
MW: Yes, relationship, but to understand your relationship, you don't necessarily need to understand how the machine works.
CN: Well the machine has so much influence over me, because the machine knows a lot about me, and systems talk and it combines it in a way with a speed and scale that I cannot compete with as a human being.
MW: Yes, but the systems are also very stupid. If you give them a picture of ten people, even the best image-recognition program will have trouble finding the same person.
CN: I know, but I just need to say, I want to emphasize why I asked this question, and to you it is not an important question.
MW: I think, again, the systems themselves are not really the place where the real danger lies in. It is in the people who are using the systems.
CN: But why don't engineers, or why don't you, invent algorithms that make it clear to me what I am dealing with? For example, why don't you make a special data-thing for me so that I can always control all my own data?
MW: My personal answer would be that there is an engineering problem and I am not interested in engineering problems.
CN: It is a conception problem, also. Like if I teach you a song, we both know the song. And if I hand you a book, it will be yours. And now the teaching of a song is treated like a book and I lose my own song.
MW: I don't think I get your point here. You're saying in sort of an accusing tone 'why don't you do this thing'. I think if enough people really want this, somebody will do it.
CN: No, I just mean to find a way to enter into your universe, because your work has an effect on how communities live, now and in the future, and I find there is a lot at stake at the moment in how communities are shaped, because of systems that we hardly understand. And they change our concepts; they also change the concept of a human being at the moment, I could argue very well. Basic concepts of mine have half changed, compared to my grandmother's, for example. The idea of the length of life, I mean — many, many things.
MW: But isn't that the same for your grandmother compared to her grandmother?
CN: Maybe, yes, maybe not, because the scale has changed so much. And the influence of scale is very large, it's more micro than ever, more large than ever. And I do think that the basic way we witness, so that we perceive what we see, including our understanding of this, is changing profoundly because of systems.
MW: Don't you think — maybe I'm wrong in this — that something like the radio had a larger impact, because it suddenly gave people a view of the whole world?
CN: Yes, I agree, radio was also great, or the telephone, even the telegraph, or smoke signals. Yes, I mean this whole history of media. But the pace and scale of collecting and distributing is as never before.
MW: Yes that is true.
CN: And also the invasion of the personal sphere is much larger.
MW: Yes, of course; privacy is one of the things I'm interested in.
CN: Why are you interested?
MW: Not so much in a scientific way, more from a political point of view. I think governments in general already have too much data.
CN: But I am looking for the clue to this problem in the relationship between human beings and these systems. So the way systems witness human beings and the way human beings witness systems.
MW: For me systems cannot witness, but systems become more powerful, and people connect different ones to each other and find new ways to exploit data that is already in the system. The data becomes more accurate, and more and more of it becomes knowledge these days. And more things are interconnected. It can lead to some good things, but it can also lead to abuse.
CN: But you know, companies and governments, if there are no fundamental things changing in the system design, it will not happen.
MW: But, I mean, there are other ways you can do it. We have the European laws, the European data directive, which specifies a lot about what companies and governments are actually allowed to store, which kinds of information, for how long, and that they have to inform people.
CN: I had an uncle who was a nuclear physicist who worked on fusion. At the time I was very much against nuclear power because of the nuclear waste problem. I had many hours of talking with this uncle, and in the end we came to the conclusion that he thought that nuclear fusion was the only solution for good energy. Yes, he did acknowledge the nuclear waste problem, but he thought that governments, commissions and political structures could prevent misuse of it. And I said, I don't believe in the political structures, so I think you should not do this research because mankind cannot handle the outcome.
MW: I think that's very naïve.
CN: What is naïve?
MW: That you think that you can prevent people from studying it for example.
CN: I think it is also very naïve if you think you can make systems that people cannot handle and it will solve itself.
MW: Yes, again, yes, absolutely. It is the same with cars, it is the same with any tool. The wheel was probably also abused by people, right? Or fire! So yes, any tool can be misused. But absolutely you cannot — I mean, for a lot of things we won't even know what the implications will be, of the research especially, but also of the technology and the engineering. So how would you know what you should and shouldn't do?
CN: So you don't believe that current society can inform research?
MW: I don't think the researchers themselves know what the results or the implications of their research will be in, say, fifty years' time. I think most researchers have no idea, and if they claim they do, then I think they are lying. So how can somebody else say 'you shouldn't do that and you should do that'?
CN: That's not somebody else, it's because you make an analysis of... I mean, for example, now they are starting a child dossier. Children are born, they get a dossier. We are talking about the privacy of children. It is one of the things that really disgusts me. So there is no personal sphere for children; it is completely invaded from the day you are born.
MW: I completely agree with you that this is, well — I think that the people who thought of these things were in general well intentioned, because they wanted to do things like fight child abuse, for example, but I agree that it goes ten steps too far, basically, beyond what you want to do and what it results in. But in a way, how is the systems engineer, or the computer scientist, or the mathematician, or somebody else, responsible for how a couple of civil servants thought 'OK, we should do it like this'?
CN: Well, I understand, but I can also imagine, well, for example, the idea that data can vaporize by itself. So the data can become thin air again.
MW: Well, if you put something on the Internet, you can forget it.
CN: No, I know, but imagine you can invent an idea of data that can destroy itself, within a specific amount of time. Then you solve already something.
MW: But these things exist already, you can use these things if you want to. Nobody uses them because they are complicated; nobody really sees the use of them. If you want to, I can give you a computer that has a hard disk that has sections for example where data will be stored for ten days and then it will be gone and lost forever. It is very easy to make these things. Nobody wants it. What you want is backups of your system, so you don't lose anything.
CN: So you actually argue that this whole idea of atomic systems and people is a big mistake, it is about people who do not know their tools. And they don't know what they need.
MW: Yes, and people will always abuse tools in one way or another.
CN: So witnessing is not an issue.
MW: Witnessing is something between people, but not between people and systems. I mean as a person you can witness a system, but a system cannot witness you.
CN: There is no interaction.
MW: You interact with a system, but the interacting doesn't imply witnessing.
CN: OK. Is there anything you would like to add?
MW: I liked the conversation. I am not sure if I gave you what you wanted to hear.
CN: That's very nice, you didn't give me at all what I wanted to hear, but that is very good too. For me, what is alarming is that if you want to think about all the trust configurations — and I don't talk about being confidential but about trustworthiness, so not about protection against attack but about being beneficial for life — then you also have to invent other ideas of systems. Because the whole idea of transparency, traceability and, you know, the sequential idea of time, I think there is something fundamental ... I think it is like the colours for painting; I think they are way too small at the moment. Although they can do so much. But it has a scale, and if you want to work on that scale, you also have to invent other values as well.
MW: I think it would be nice if that were the case, but I don't believe it.
CN: What don't you believe?
MW: I don't think it is necessary to have these — for want of a better word — these moral values in a system.
CN: You don't have any values in a system.
MW: Well, I am repeating myself: for me it is a tool, and moral values are something from humans.
CN: That's not true because you describe a mathematical heaven, which in the end, because of business models, creates muddy tools. So you do have an idea about quality.
MW: Well, yes, of course, but that's, well — do you think quality is a moral value?
MW: I think we can write down pretty precisely what that is without any vague personal..
CN: Well, if you think about quality of life for a child, or education, quality of food.
MW: Yes, absolutely of course. But of a system I can say that it will not fail for more than one percent of the time that it is running.
CN: Yes, but that is not the quality I mean. I mean, I like good food, so I like good systems.
MW: Well, that is a very subjective comparison.
CN: I don't think so.
MW: Yes, I think so.
CN: You really disagree, that's very interesting. I do think there is inter-subjective sense of quality.
MW: But a good system for you might be completely different from a good system for me. Just like good food for you, will be completely different from good food for me. Or could be potentially at least.
CN: But poisoned food? Do you know what I mean? So I think some of the values could, well, I am curious to see, what values are there?
MW: Yes, but I think people will do with the tools whatever they want to do with the tools. So if they can abuse it for something, they will do it. And it can help other people that will also do it. It is independent of the system, I think. There are always people using systems for abusing others or helping others, and for anything in between.
CN: Thank you
MW: You're welcome