Thomas Quillinan


Biography

Thomas Quillinan is a Security Researcher at the D-CIS Research Lab in Delft, in the Netherlands. He received a Ph.D. in the area of Security for Distributed Systems and an M.Sc. in Computer Science from University College Cork in Ireland. He previously held the position of Postdoctoral Researcher in the IIDS group at the Vrije Universiteit Amsterdam, working in the areas of Distributed Systems and Crisis Management. He originally trained at the University of Limerick in Ireland, where he received an undergraduate degree in Computer Engineering. His research interests include the security of distributed systems. He is a member of the ACM and an avid sailor.

23rd April 2009, Amsterdam, The Netherlands

Thomas Quillinan is a colleague in the Intelligent Interactive Systems Group at the Vrije Universiteit Amsterdam. He is involved in an EU project on crisis management, and his insight into the collaboration between systems and people is informed by this work. Every time I enter the office he shares with Martijn Warnier, his friendly and cheerful attitude strikes me. Little jokes, bits of information, a remark here and there. Some of the things he said while taking an elevator, for example, stick in my mind.

Summary

Quillinan, when asked about his association with witnessing, makes the distinction between looking and seeing. Seeing refers to concentrated attention, whereas looking refers to a more general perception of only the barest details of an environment. If you are looking around in a crowd, you often just see the different heights in the crowd. Even so, when looking around, people seem to pick out familiarities and recognize, for example, a friend in a crowd. 'Seeing' a crowd may involve focusing on each person's face and identifying, for example, how many women and men there are. When people look at a computer this distinction is important, argues Quillinan. Often people are looking at a computer, but they are not really seeing what is in front of them. Moreover, even more than seeing, one needs understanding when interacting with a computer, Quillinan argues. Human-computer interaction and human interface design are very important for this reason. Understanding the difference between what one person sees and what another person sees is a particular problem for engineers and people who are designing systems. Often, when people are building computer systems, they build them in the way they think it should be done. Even in a large organisation with defined guidelines for doing things, there is often an institutional memory about how things should be done. This is usually not intuitive for somebody who is coming to it for the first time.

Realizing the impact computers and systems have, the question that arises is whether, and if so in what ways, computers are intelligent. There are many standard definitions of intelligence, for example that intelligence reproduces or that intelligence preserves itself. These, for example, computers do not do. Quillinan argues that computers are not alive, and therefore they cannot be intelligent, not in the traditional sense. Also the requirement formulated in the interview with Sunil Abraham, that systems, to be participants in human communities, should be capable of self-destructing, requires intelligence according to Quillinan. To him computers are very interesting toys and tools, which can do an awful lot of useful things. They are not intelligent though.

Instead of having the computer adapt to the person, the person is supposed to adapt to the computer. People have different ways of communicating with a computer, but the computer has only one way of communicating with the user. Two people communicating both adapt: even if they do not speak the same language, they use certain universally accepted truths about communication, regardless of the language they speak. Computers do not have that ability; they do not have the ability to adapt to any user at all. When people tune their presence to systems, they have to train themselves to communicate in the way the computer wants them to communicate. A person is forced to change, states Quillinan; that is the only way it can be done. Because systems have a lot of executing power, things only get worse according to Quillinan. Often people will blindly do what the computer tells them to do, and regardless of whether it makes sense or not, they still follow it. A classic example is people who blindly follow their GPS units onto narrow footpaths because the GPS unit said to go there. People seem to suspend their own judgement. In crisis situations the trust in the computer only seems to increase, as Quillinan finds in his research on crisis management.

The adaptation of human beings to computers can be understood as a result of the 'master and slave' model, which has defined the design of computers and systems since the early days. They have been designed to slowly take over the position of the masters, and to eliminate the slaves, by requiring higher and higher levels of thought to interact with them. This can be changed, but how to change it is very difficult. It would require a complete change in the mindset of how we use computers. Quillinan thinks out loud when he suggests that asking them for advice, and then making your decision as a human being, would already create different systems. This would seriously challenge the current model in which, on the basis of inputting data and having something operate on that data, the result is taken as 'gospel', as Quillinan formulates it.

Current computers and systems do not have the ability to negotiate with a human being about what is a good idea or not. Programming such negotiation with a human is very difficult because people all mean different things when they say something. You have to be able to negotiate what you mean, the terms of reference, with the computer. And then it has to have some sort of training so it understands what you mean by things, as Quillinan illustrates: "You can't say well, I want this to be sparkly instead of a single colour". Today computers are good at repetitive tasks without boredom, which humans are really bad at. And they will do the same thing hundreds of millions of times if you ask them to, without complaining. Humans are good at higher-level thought: solving a problem in an asymmetric manner. Computers can only do what you tell them to do. Quillinan agrees that they have become masters to a certain degree, but the flipside is that they are also our slaves. They do the really ugly tasks that we don't want to do.

The way things have changed over time is dramatic. In the way things are computed, we follow a model of things just happening in the order we specify, and that is how most computers run. But there are alternatives: instead of making things happen in order, we can have some sort of logic-based thinking, where you backtrack through the information you have and then figure out what happens next. That is an alternative way of computing, and while such computers exist, they are not very popular. In the physical production of computers, it is all emulated in a sequential and parallel manner, and that is in some ways more logical, because what you are doing in such a logic-based approach is forcing the computer to behave the way the human wants. People are still very active in researching other models of designing systems, but overwhelmingly people have chosen to follow the imperative model and do things in that way.
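The distinction drawn here between imperative computing and logic-based, backtracking computation can be made concrete with a toy problem. The sketch below is purely illustrative and not taken from the interview; both functions find the same pair of numbers, but the first walks through steps in a fixed, specified order, while the second assigns values to variables one at a time and backtracks when a constraint fails.

```python
# Toy constraint: find digits x, y with x + y == 10 and x * y == 21.

def imperative_solve():
    # Imperative model: steps happen in exactly the order we specify.
    for x in range(10):
        for y in range(10):
            if x + y == 10 and x * y == 21:
                return (x, y)

def backtracking_solve(assignment=()):
    # Logic-based model: extend a partial assignment, check the
    # constraints when it is complete, and backtrack on failure.
    if len(assignment) == 2:
        x, y = assignment
        return (x, y) if x + y == 10 and x * y == 21 else None
    for value in range(10):
        result = backtracking_solve(assignment + (value,))
        if result is not None:
            return result
    return None

print(imperative_solve())    # (3, 7)
print(backtracking_solve())  # (3, 7)
```

Both arrive at the same answer; the difference is in who dictates the order of operations, which is the point Quillinan makes about forcing the computer to behave the way the human wants.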

The reason that we use the imperative model is partly a result of computing being mostly funded by the military, but it also has to do with how cheap it was to produce components. Those components spread everywhere, and therefore it is, as he puts it, not exactly Bill Gates' fault, but the success of companies like Microsoft and IBM in producing cheap computers that has entrenched this model. It is much more expensive to produce other sorts of computers now, Quillinan explains.

Asked to comment on the rhythms, frames, loops and organisation of time in machines and between people in communities, Quillinan emphasizes that scale is the most important difference between the two. Computers operate on very, very short periods of time: microseconds, nanoseconds. Humans operate on a much longer scale; they do not really notice where an hour goes and do not remember every hour and every day. People only remember the parts that were important to them; they keep very small memories about a specific time, and can hardly remember what they did this morning. Computers, in contrast, remember everything they are told to remember, but they also operate on things in byte-sized chunks and never for very long. Computers are programmed to have rhythms; they make certain operations happen every minute, every hour, every day. And they have the clock, whose cycle defines exactly how often things happen. Hertz measures cycles per second, so a kilohertz is a thousand cycles a second.
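The arithmetic behind this scale difference is simple: a frequency in hertz counts cycles per second, so the duration of one cycle is its inverse. A small worked example (the specific frequencies are illustrative, not from the interview):

```python
# Convert a clock frequency to the duration of a single cycle,
# making the human/computer timescale gap concrete.

def cycle_time_seconds(hertz):
    # Hertz counts cycles per second, so the period is 1 / frequency.
    return 1.0 / hertz

print(cycle_time_seconds(1_000))          # 1 kHz -> 0.001 s (a millisecond)
print(cycle_time_seconds(1_000_000_000))  # 1 GHz -> 1e-09 s (a nanosecond)
```

At a gigahertz clock, a computer completes a billion cycles in the second it takes a human to blink, which is the gap in scale Quillinan describes.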

Compared to humans, computers do not create knowledge; they compute very fast while dealing with knowledge. And they do this faster and faster, and every new version replaces earlier ones. This 'lack of respect for ancestors' in computing is not a problem for Quillinan: "Do we really want to go back to doing things the way they were done at first?" he asks.

When asked about the interaction of rhythms between humans and systems, Quillinan emphasizes that humans operate on a completely different rhythm to computers. Human beings are very irregular, even though they have certain regular intervals as well. When these rhythms meet, either human beings are forced to warp to the computer's rhythm, or computers have to become more intelligent, which would require them to be capable of finding out how the human being is feeling. Nevejan finds it interesting that he uses 'feeling' as a factor of distinction for intelligence, because in presence, feeling is very important: it helps you stay away from pain and move towards homeostasis; it helps you move towards well-being. The sense of presence is the sense of survival, in which feelings, emotions and literal sensations are key informants.

One of the major advantages computers give us is not having to be in one place. Especially with access to the Internet, one can be pretty much anywhere in the world and still interact by way of a computer. It is one of the joys of computers to have this lack of place, which would otherwise define what you will do based on where you are, Quillinan finds. Normally when you are interacting with computers, you are using a keyboard and looking at a screen, and this configuration defines what sort of inputs and outputs you can give and receive. People who are designing these systems utilize this habitual knowledge to design place in computers and how computers talk to you. But in a wider sense, how one feels when using computers depends on the place one is physically in. If you are sitting at a beach typing on your laptop, Quillinan imagines "people to feel much happier because they are in a happier place. Whereas if they are in a dungeon, with water dripping down their neck, it is a little bit less".

Because of their scale and speed, computers and systems have changed important concepts of space, speed, connection and impact. Computers have changed people's perception of the world, and the effect of their actions is really profound. Quillinan agrees, but also emphasizes that at core he is an engineer: to him computers and systems are tools, and the creativity behind computers he understands as being human-inspired. The tools themselves he has never considered to be beautiful or creative at all. Nevejan argues that the input of data and the processes they perform seem to be of a different category than the impact their outcomes have. Quillinan agrees that computers do change people, but interacting with your computer often means interacting with other people. Especially in social networking, interaction with computers is actually influenced by interaction with other people. In that regard they are more than just tools; they do have some impact on your life. If it were just a dumb terminal you were entering data into to do something and return a result, nothing would really change; one would just use it to do something very quickly. Quillinan supposes that the difference between what he means by tool and what Nevejan means by computer is that Nevejan focuses on how computers change human beings, whereas Quillinan sees a computer as a facility which allows human beings to change by interaction with other people.

In relations, the witnessing and recognizing of the other is very important. Computers facilitate a lot of relations, argues Quillinan, but it depends on what you are using your computer for. There are two different ways of having relations when using the computer: with people one knows, and anonymously, with people one does not know. If you are using it on a professional basis, people know who you are. You tend to format your communication in that manner: you tend to be a bit more formal, and a bit more restrictive in what you say. Whereas if you are just commenting anonymously on the Internet, you can pretty much say what you want, whether it is nasty or obnoxious or not. Anonymity gives you the ability to say things, for example in a political sense, says Quillinan, that he would never say if someone knew who he was. Quillinan does not like strangers knowing his views on certain things, and in an anonymous fashion you can have an influence on what other people think, which he personally finds very interesting. In computer networks, when people never have to interact with a person again, they can be truly cruel and really nasty, whereas in person they tend to tone down their real feelings. There is not as much violent hatred in real life as on the Internet, for example. How communication between people is formatted not only depends on computers and systems like the Internet; it also depends very much on how people relate to each other, and how you communicate with other people through computers is very different from how you communicate in person.

Quillinan agrees that one can describe computers and systems as formatting tools of human presence. The interaction, the formatting, tends to be dependent on the application, he argues. If you are just trying to communicate in one way, versus two-way communication, then a big screen with data in a very clear, organized fashion is fine. Whereas if you want to have interaction, you have to have a different sort of set-up. Other aspects of formatting, according to Quillinan, are scale, impact and how many interactions one actually wants to have. On the one hand, a person's impact on the world is larger on the Internet: you have much more ability to reach people in many more places. But it is also smaller, because it is one voice among billions, rather than one among thousands in a big speech, for example.

When designing interaction between human beings and computers, certain values are important. To Quillinan, pleasing the eye is a very important value: in the user interface, the ability to look at something and feel comfortable with the colours and styles is very important. The original computers interacted by flashing lights at you, whereas now they allow you to feel more comfortable with what is going on. They do this by emulating something physical, representing physical things like folders and a desktop to make interaction smoother. Other values, like transparency and identity for example, computers do not have. As stated before, for Quillinan they are tools that facilitate communication and relations.

Quillinan is working on a crisis-management use case for a European-funded project. They are modelling crisis as it happens. For example, if an area of the Netherlands is flooded, certain things have to happen towards resolving this crisis, and this involves interaction between people and computers. They are developing a model of crisis management and using it to simulate different disasters, to see how the overall plans that are in place can be adapted to reflect them: whether these plans are good enough, or whether they need to be modified to reflect this sort of crisis. The most basic task of crisis management is to save lives. You do not care about anything else until all the lives have been saved, or as many as possible. Once the lives are saved, you want to prevent any more property from being damaged. And after that, it is to return things to normal.

Witnessing is one of the issues in crisis management. In a crisis, different people witness the same thing in very different ways: one person will say, 'there is a major flood in my street', while another person says, 'oh it's not that bad, it is a puddle, it will go down'. So both people call the emergency services, and who do you believe? And where is that input taken, and how is it dealt with? This is a huge problem. One solution is to base this on reputation: if a policeman calls and says there is a serious flood in the neighbourhood, it tends to be taken more seriously than when a random citizen calls saying 'there is a huge flood in my backyard'.
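The reputation idea described above can be sketched as a simple weighting scheme. This is a minimal, hypothetical illustration, not code from Quillinan's project; the roles and reputation weights are invented for the example.

```python
# Hypothetical reputation weights per caller role (illustrative values).
REPUTATION = {"police": 0.9, "firefighter": 0.85, "citizen": 0.4}

def flood_confidence(reports):
    # Each report is (role, severity in [0, 1]). Combine them into one
    # severity estimate, weighting each report by the caller's reputation.
    weighted = sum(REPUTATION[role] * severity for role, severity in reports)
    total = sum(REPUTATION[role] for role, _ in reports)
    return weighted / total if total else 0.0

# A policeman reporting a serious flood outweighs a citizen downplaying it.
reports = [("police", 0.8), ("citizen", 0.2)]
print(round(flood_confidence(reports), 2))  # 0.62
```

The design choice is that conflicting witness reports are not discarded; they are all kept, but a caller with higher assumed reliability pulls the estimate further towards their account.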

Perception of the outcome of a disaster is a second issue in crisis management. Quillinan elaborates on several examples he knows. For example, schools are officially no longer evacuated by the state, so if your children are at school, you are expected to go and get them. But people have very strong perceptions about children: if you let a school flood and all the children die, that would be very bad, and you can imagine a government failing and falling because of that. Ethical positions in crisis management seem to be mostly defined by political perception, Quillinan finds. Even though the ethical position might be to save as many people as possible, in practice it is to save as many of the right people as possible. But defining who is worth saving or not is a very dangerous thing to do.

In the simulation models Quillinan is working on, official lines are the starting point of design. Both political realities and personal complexities, like the hysteria of the woman who thought her backyard was flooding, are not taken into account. Computers do things very fast, but they are not fast enough 'to simulate the entire world in high-speed', as Quillinan formulates it. They have to work with subsets: they are modelling the overall organisational model, looking at the crisis coordinators of the Netherlands and how they direct orders to the organisations beneath them, the police organisation or the fire organisation. The simulations aim to inform overall policy changes rather than 'go to this street, put out the fire in that house, evacuate people first of course'.

One of the biggest problems in most crises has been lack of understanding, lack of knowledge of what has happened in the past. Poor decision-making that happens today has happened before. The first thing that systems can contribute is knowledge, according to Quillinan. By analysing all the different scenarios that have happened, or trying to create those, it is possible to ask the system, 'this number of schools have been flooded; how many people do we need to save all those children?', and that is the sort of thing computers are very good at. The identity of systems in crisis management is to be knowledge repositories. They have a lot of knowledge about what has happened in the past and what is happening now, and can correlate between the two.

The other task is of course to facilitate communication. The advantage of dealing with organisations such as the police department or fire department is that they have a very rigid hierarchy. This means that you can direct orders and you have people who have a strategic overview. You have institutional knowledge of where the different units are located and to which you can direct your resources. Not only hierarchical information is in the focus of attention of Quillinan and colleagues, but also input from members of the public. This tends to be more chaotic and difficult to coordinate. When studying the post-mortems of previous disasters, Quillinan finds that often what went wrong is that people did not notice something that was important. For example, they did not notice that a particular plane did not have enough fuel in it because a meter was broken. The cause is never one huge problem; it always tends to be very small. Another issue is that in a crisis, people will tend to do what the computer tells them to do. They seem to have more trust in a computer when there is a crisis taking place, even more so than normal, thinking 'it knows something that I don't', as Quillinan states it.

Asked about witnessed presence in systems, middleware or applications, Quillinan finds that the difference between applications and infrastructures is dramatic. Even though he claims that computers are tools, at the application level they have to deal with humans, and a certain compromise is made to deal with them. The easiest way for a computer to get a lot of information is to have a text form filled out, one line after another, so it can deal with the input efficiently. But that does not suit the humans who have to use it. On the infrastructure level, computers talk to computers most of the time, so the communication tends to be much more efficient. But also in infrastructure there are institutional, human-based constraints built into computers; they cannot just discover how to talk to each other. Computers witness other computers in an extremely primitive way: they recognize that they are there or not there. A computer does not change because it is dealing with an HP computer instead of a Dell computer. It does not make any difference to it; it does not have any feelings. Computers don't have feelings; human beings have feelings.

Transcript

The following is an edited transcription of the conversation. Film fragments of the conversation are included to illustrate parts of the transcribed text.

[Sequence 1] [Sequence 2] [Sequence 3] [Sequence 4] [Sequence 5] [Sequence 6]

CN: So it is the 27th of April 2009, we are here at the Vrije Universiteit with Thomas Quillinan. You are from Ireland, and I am very happy that you want to talk to us and help us explore these issues of witnessed presence and system engineering. Can you briefly describe what you do?

TQ: I am a researcher at the VU. I have a number of tasks; part of it is project related, I work in a number of different research projects with a lot of external partners. I also do teaching to a certain degree, and I have some supervision roles: a number of students do research for me or with me, so I supervise those. And of course some administrative tasks are part of it as well.

CN: And your research is mostly focussed on security?

TQ: Security and distributed systems.

CN: So can I ask you, just really broadly: if I say witnessed presence, what happens when one witnesses another? Is that a notion that resonates?

TQ: I don't understand what you mean.


Sequence 1

CN: Well, when I see you and you see me, I witness you; before we even interact, I witness you. I see you, I recognize you or I don't recognize you. So this is between people, but it also happens between people and systems, and between systems and systems. So I was wondering if you could comment on the act of witnessing.

TQ: Well, I think first of all there is a difference between looking and seeing; that's a very important distinction. When you look at something in the real world, you just see it as it is; it is not necessary that you are paying attention. And people have a similar problem with systems: they look at a computer system, and often they see what they want to see. They are looking at it, but they are not really seeing. And I think it is a particular problem for engineers and people who are designing systems to understand what the difference is between what one person sees and another person sees. I don't know if that's what you are looking for.

CN: Well what is the difference between looking and seeing? What do you need to do?

TQ: Concentration, I think, is the most important thing. When you are looking at something, often you just take in the barest details. I mean, if you are walking in a crowd, you often just see the different heights in the crowd, rather than focusing on each person's face and identifying 'OK, there are ten women and five men approaching me right now'. That's more seeing, I would feel, anyway. And there is a similar problem with computers. When people are interacting with computers, and I'd rather blame people than blame computers, but when people are looking at computers, they are only seeing the things they want to see. And often, I think, human-computer interaction and human interface design are very important for this reason. You see things in a certain place, and it can be very frustrating trying to understand what an engineer meant when they designed something in a certain sequence.

CN: That's interesting, because you actually argue that witnessing a system or a computer is unlike witnessing a person. I mean, even if I only look at a person, if I am in the street and I look at you, even by looking at you I already recognize a distinct set of features, which will make me run, or make me embrace you, or chat with you, or whatever.

TQ: In a crowd, of course, people tend to, I don't know how the brain works, but you do seem to pick out familiarities. So if a friend of yours is in a huge crowd, you will see him, even though you are just looking.

CN: So what is it that people don't see about computers that they should see?

TQ: I think it is less about seeing and more about understanding. Computers tend to be, well, I suppose there is an expression in the open-source world, which is 'developers scratch an itch'. So if something is not working for a developer, they go and create a programme that scratches that itch, that solves that problem. And a lot of times when people are building computer systems, they are building them in the way they think it should be done. Even if it is a large organisation and there are defined guidelines for doing things, there is often an institutional memory about how things should be done, and that's not normally intuitive for somebody who's coming at this for the first time. So instead of having the computer adapt to the person, the person is supposed to adapt to the computer. And you can see that quite clearly with different operating systems, for example. If you move from one operating system to another, from a Mac to a PC, you have to change the way you do things, rather than the computer evolving to change for you.

CN: And you think the computer can do this?

TQ: In an ideal world of course it should, but current technology?

CN: I'm just thinking, because you say there is an institutional memory, so this is like a human behaviour, part of the history of human-machine interaction, and yet there is another intuition people have of how they want to use the tool, or the system, or the intelligence behind it.

TQ: Well, lack of intelligence. But yes, there is a difference, but one of the problems is that people have a way of communicating with a computer and the computer has only one way of communicating with the user. And it depends what sort of interfaces you have to a computer before you will be able to change things so the computer is more accepting.

CN: There is this quote of Thomas Kuhn, who says that people, but also other beings, can only interact when they recognize each other's spatio-temporal trajectories.

TQ: Yes, I know what you mean, but that's obviously a problem with computers. Instead of both adapting, as with two people communicating: even if they don't speak the same language, there are certain universally accepted truths about communication. If I try pointing at something, you'll know that I mean 'well, what is that?' or 'can I have that?', regardless of the language we speak. Computers don't have that ability; they don't have the ability to adapt to any user at all. You can start typing random words and it will not make any difference.

CN: So how do you think people tune their presence to the systems?

TQ: Well, I suppose it is training. You have to train yourself to communicate in the way the computer wants you to communicate. You are forced to change. That's the only way it can be done.

CN: But computers have become, or systems, very powerful in communities of people.

TQ: Yes, absolutely.

CN: And they have executing power to a very large extent. So you could argue they have become participants but actually you argue now they have become kings.

TQ: Well, to a certain degree, yes, but I think it is worse than that. Because, well, as I said, my background is in security, and often people will blindly do what the computer tells them to do. And regardless of whether it makes sense or not, they still follow it. A classic example: in recent weeks I have read a number of news articles where people have blindly followed their GPS units onto narrow footpaths, because the GPS unit said to go there, so 'I follow it because it tells me where to go'. And they seem to suspend their own judgement, because the computer is telling them something different. And yes, I think computers are kings, and in certain areas you do what the computer tells you to do. I suppose it is dangerous that computers become too intelligent for that reason.

CN: Well but this is exactly the heart of the matter. And you're also confident that computers and their systems cannot, well, can they have sort of a position in their own right?

TQ: I don't understand what you mean.


Sequence 2

CN: The way you describe it now is as that computers have become kings because people behave like slaves.

TQ: That is an interesting way of looking at it.

CN: Something in the computer, possibly its spatio-temporal trajectory, makes people behave like slaves, because they do not anticipate, they do not recognize, so you have to adapt, and since you adapt anyway, you can adapt all the way. So can you imagine something you can do in the way systems are designed, without having the solution, to stop this adaptation? Would that be possible?

TQ: It's very difficult, and I'll tell you why. Computers have been designed to be like this from the very beginning. Back in the earliest days of computers, they talked about masters and slaves. Alan Turing, one of the founders of computing, talked about this - you need to remember that computers were very primitive back then - and he considered himself and two other chosen people to be the masters, and the slaves were the people running downstairs replacing components that had broken. So from the very beginning that's exactly how computers have been designed: they have been designed to slowly take over the position of the masters, with higher and higher levels of thought needed to interact with them. And the slaves, in this case, are also taken over by the computers. To answer your question: can it be changed? Yes, it can be changed, but how is very difficult. It would require a complete change in the mindset of how we use computers. Asking them for advice, maybe, and then making your decision. Rather than inputting data, having something operate on that data, and then taking the result as gospel.

CN: Well yes, very nice, that's a very nice first step.

TQ: It's the advice, I think, the computer has to advise rather than order you, and that is not how they work. They don't have the ability to negotiate with you what is a good idea or not.

CN: And why do they not have that?

TQ: The simple reason is that it is very difficult to programme negotiation with a human, because we all mean different things when we say something. Every person has a different take on what that means - I think that's exactly what you are doing, you are getting different takes. And I think the computer is not intelligent, and therefore it's very difficult for it to adapt to humans.

CN: But systems can be intelligent.

TQ: Well, it depends what you mean by intelligence. What is intelligence? There are many standard definitions of intelligence: that it reproduces, that it has self-preservation. Those, for example, computers don't do. You can argue, but at the moment they don't. I don't think computers are alive, and therefore they can't be intelligent, not in the traditional sense.

CN: I had a very nice conversation with Sunill Abraham who is an Open Source man from Bangalore and participates in Open Source Asia. And he says, systems are becoming authentic in the sense of being alive because they are acquiring the possibility to self-destruct.

TQ: Well, self-destruct to me entails intelligence, though. In order to be able to decide to no longer be there requires intelligence. I don't see that. I see computers as very interesting toys and tools; they do an awful lot of useful things, but I don't particularly see them as intelligent.

CN: Let's talk scale. The big difference between us and the systems is the scale.

TQ: Yes

CN: The scale in time, the scale in place. Because they do maybe some of the things we can do as well, but they do it at such a different scale that we cannot perceive any more what they do. Which makes it easy to think of them as kings.

TQ: I suppose, but I think it comes back to what a computer is good at, and what it is bad at. Computers are good at repetitive tasks, without boredom. Which humans are really bad at. And they will do the same thing hundreds of millions of times if you ask them to, without complaining. Whereas humans are good at higher-level thought: solving a problem in an asymmetric manner. Computers can only do what you tell them to do. I agree that they are to a certain degree kings, but the flipside is that they are also our slaves. They do the really ugly tasks that we don't want to do.

CN: So what would be necessary to get them in a role of adviser instead of decision maker?

TQ: That is a tough question. Intelligence of course, but that term is overused by now. I think negotiation is very important in this regard: you have to be able to negotiate what you mean, the terms of reference, with the computer. And then it has to have some sort of training so it understands what you mean by things. It is difficult to imagine how it could be done.

CN: Well at least there should be always more than one option. For example one answer is never allowed. Or does it not make sense?

TQ: Well, I suppose to some degree it does. Like for example, what colour you'd like your screen to be. And I suppose if you give it multiple choices, it is allowing some sort of negotiation. But it still doesn't really define interaction; you're still confined by the choices the computer gives you. You can't say, well, I want this to be sparkly instead of a single colour.

CN: Let's focus on the four dimensions that I defined for the relation between presence and trust: time, place, action and relation. So can you comment on the sort of time rhythms, frames, loops, organisation of time in machines and systems and organisation of time between people and communities.

TQ: Well, I think scale is the most important difference between those two. Computers operate on very, very, very, very short periods of time: microseconds, nanoseconds. Humans operate on a much longer scale. I mean, we think - you don't really notice where an hour goes, you don't spot every hour in every day, you don't remember every hour and every day. And you take time as being very small memories about a specific time; I can hardly remember what I did this morning, but the parts that were important you hold on to. Computers, in contrast, remember everything they are told to remember, but they also operate on things in byte-sized chunks and never for very long.

CN: So actually their range of execution time is very small while their memory is very large.

TQ: Exactly.

CN: Very strange. And is there like a cycle or a rhythm in the systems?

TQ: There is, to a degree. We programme them in such a way that they have rhythms, and certain operations happen every minute, every hour, every day, but I suppose it is very artificial. It is humans driving those cycles.

CN: But at the heart there is still the current that goes through their bits.

TQ: Yes, that is true of course.

CN: So actually that is a rhythm.

TQ: I suppose that is true, and you have a clock cycle, and a clock cycle defines exactly how often things happen and in the case of modern computers very fast.

CN: The clock cycle?

TQ: So every computer, every electronic device, has a clock. And the clock defines, if something is - well, your computer is a 10 Gigahertz computer, say, and Hertz defines a time cycle, so a Kilohertz is a thousand cycles a second. It may be more than that, now I get in trouble ... but you know what I mean, Megahertz, Gigahertz ...
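
The arithmetic TQ is reaching for here can be made concrete with a small sketch (illustrative only, not from the interview): Hertz counts cycles per second, so the duration of one clock cycle is simply the reciprocal of the frequency.

```python
# Clock frequency vs. cycle time: Hertz counts cycles per second,
# so the period of a single cycle is 1 / frequency.

def cycle_time_seconds(frequency_hz: float) -> float:
    """Duration of one clock cycle, in seconds."""
    return 1.0 / frequency_hz

KILO, MEGA, GIGA = 1e3, 1e6, 1e9

# A 1 kHz clock ticks a thousand times a second,
# so each cycle lasts one millisecond.
assert cycle_time_seconds(1 * KILO) == 1e-3

# A 3 GHz processor ticks three billion times a second:
# each cycle lasts roughly a third of a nanosecond.
print(cycle_time_seconds(3 * GIGA))
```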

CN: So if we stick with this community of systems and people, could you imagine that you could sort of transpose the material rhythm of computers - and then there's the way we programme these material rhythms of the computer - before it interacts with our biological rhythm? Is there any synchronicity possible?

TQ: I don't know, I can't see how it could be. Humans operate on a completely different rhythm to computers. We are very irregular, I know we have certain regular intervals as well..

CN: And what do you think will happen if these two meet?

TQ: Well, either we are forced to warp to the computer's rhythm, or computers have to become more intelligent again; they have to figure out how you're feeling, and know whether you have had your coffee. Or we then have to confine ourselves to wait for the right moment every day.

CN: So this feeling is very interesting, because in presence the feeling is very important, because it helps you stay away from pain, towards homeostasis; it helps you move towards well-being. So actually the sense of presence is the sense of survival, in which feelings and emotions and literal sensations are key informants. The thing is, with animals you can think 'do they have feelings', or with trees you can think 'is there a soul in there'. But even computers are made from natural material of course. And also, if so many people put something in it, it is like cooking: if so many people throw vegetables in the stone soup, you get a really good soup, that becomes nourishing.

TQ: So it is all stone soup in the end?

CN: No it really becomes a good soup.

TQ: I know the story you're referring to, it is just that I suppose the moral of the story is the sum of the parts is more than what someone defines it as.

CN: But is that a mechanism that will work? When you work in distributed systems.

TQ: Well, all a distributed system does is allow you to do tasks much quicker; that's the aim of it, that you can do more complex tasks across a number of computers. It is not that it does anything more special. There is no ...

CN: I would argue for example, I can say human being is 70% water. I can also say, you are a mind.

TQ: Well, you are just a mind contained within a body, is that it? ..:)..

CN: Yes, a mind within a body, but imagine you can say 'because you have' well I would say, for example, distributed systems make new ways of knowledge production possible.

TQ: I would say, less production, more computation. That is not the word I am looking for - more dealing with knowledge rather than creating new knowledge. I think, and this is your bias, it is just a means to do something more quickly, that's all it is, it doesn't really do anything special. It is just faster. It is just like getting a new computer. There is a thing called the Top 500, the supercomputer rating, the top 500 supercomputers in the world. I forget the exact statistics, but I remember watching a talk by the guy who runs it, and he said that the laptop he was presenting from would have been top of the ranking five or six years ago. These are the most impressive computers in the world, and yet they are being overrun by events. So that's how I look at distributed computers. It's just the advantage of the future today, if you know what I mean by that.

CN: So we are overrun by ancestors all of the time.

TQ: Of course, and with computers it's very fast. Because that is what they do, they do things very quickly. They get faster.

CN: Is there a problem with the sort of no respect for your grandparents of computers?

TQ: Not really. Do we really want to go back to doing things the way they were done at first?

CN: No, but maybe what is interesting is that some roads have been closed off on the way, and they maybe need to be revisited sometimes.

TQ: Well, the way things have changed over time is dramatic. Just in the way things are computed. We follow a model of things just happening in the order we specify, and that's how most computers run. But there are alternatives: things like, instead of making things happen in order, we have some sort of logic-based approach, where you backtrack through the information you have and then you figure out what happens next. That's an alternative way of computing. And while such computers exist, they're not very popular, and not much research has gone, in recent years, into the physical production of such computers; it's all emulated on top of the sequential and parallel machines. And that's in some ways more logical, because what you're doing in such a logic-based approach is forcing the computer to behave the way the human wants, rather than the human to behave the way the computer wants. And there are a number of different ways of doing this, but that's one of the avenues that has been - well, I would get in trouble with the wider scientific community if I said it had been closed off, because people are still very active in researching this area - but overwhelmingly people have chosen to go with the imperative model and do things in that way.
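
The contrast TQ draws can be sketched in miniature (an illustration, not the project's code; all function names are invented): the imperative style spells out each step in order, while a logic-based style states what counts as a solution and lets a generic backtracking search find it.

```python
# Imperative style: the programmer dictates every step, in order.
def pairs_summing_to(target, numbers):
    result = []
    for i in range(len(numbers)):
        for j in range(i + 1, len(numbers)):
            if numbers[i] + numbers[j] == target:
                result.append((numbers[i], numbers[j]))
    return result

# Logic-based style: describe the goal and let a generic backtracking
# search extend partial solutions, discarding branches that fail.
def backtrack(candidates, partial, is_goal, is_viable):
    if is_goal(partial):
        yield tuple(partial)
        return
    for c in candidates:
        partial.append(c)
        if is_viable(partial):          # keep extending this branch
            yield from backtrack(candidates, partial, is_goal, is_viable)
        partial.pop()                   # backtrack: undo, try the next

# Goal: an increasing pair drawn from the list, summing to 10.
nums = [2, 3, 7, 8]
goal = lambda p: len(p) == 2 and sum(p) == 10
viable = lambda p: len(p) < 2 or (p[0] < p[1] and sum(p) == 10)

print(pairs_summing_to(10, nums))                   # [(2, 8), (3, 7)]
print(list(backtrack(nums, [], goal, viable)))      # [(2, 8), (3, 7)]
```

Both compute the same answer; the difference is who specifies the order of operations, which is exactly the master/slave point TQ makes.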

CN: But also a lot of research was funded by the military

TQ: Yes absolutely, that's true, but the reason that we use the imperative model has more to do with how cheap it was to produce components. And those components won out, and therefore - in fact it is, I wouldn't say Bill Gates' fault, but it's because of the success of companies like Microsoft and IBM in producing cheap computers that this model is there. It is much more expensive to produce other sorts of computers now.

CN: Yeah, I've thought about this sequential thing yes. So let's talk place. What is place to you, because I mean, it's systems, but it is also distributed. So how about place?

TQ: Well, that's actually a very interesting thing, because one of the major advantages computers give us is the lack of having to be in one place. Especially with access to the internet, I can be pretty much anywhere in the world and still interact with my computer. In fact, I regularly use my computer that's at home when I'm at work, because it does certain things faster for me, and I have information there. So that's one of the joys of computers actually: you do have this lack of place, this lack of defining what you're doing based on where you are.


Sequence 3

CN: And do you feel that - well, you have space and place, and in anthropology the difference between space and place is that place is space with culture. So place is culture embedded in the environment. Can you elaborate both on the place-ness, on the culture of place-ness, of computers and systems, as well as of the engineers? So what are the values in this culture? There are people who design these systems, and certain values in these systems.

TQ: That's an interesting question as well. I suppose the most basic answer is that the place you are in, normally, when you are interacting with computers, is that you're using a keyboard and you are looking at a screen, and that defines what sort of inputs and outputs you can receive. I suppose engineers, people designing these systems, utilize that knowledge, that that is how you are interacting. And that's what your place looks like when computers talk to you. But in a kind of wider aspect, I suppose place - how you use computers, how you feel when you are using computers - depends on the place you're in, the space you're in. If you're sitting at a beach typing on your laptop - I can imagine people, and I've never done this, but I can imagine people feeling much happier because they are in a happier place. Whereas if they are in a dungeon, with water dripping down their neck, it is a little bit less so.

CN: But there is also, the moment you go into the screen, there is a world there. Which is designed, and this world sort of reflects on, well, there is an idea about efficiency. Can you formulate what sort of values?

TQ: I think first of all the computer is emulating something physical.

CN: Emulating meaning?

TQ: Emulating meaning: it is trying to represent something physical. So if you look at what we call certain aspects of your interaction with the computer, it is folders, which is supposed to give you the idea that you put things in folders and file cabinets and stack them on shelves. But most of the time you are looking at your desktop, which is meant to mean your table, and that things are arranged in certain places so that you can go and pick them out. And that's one aspect of it, that you do have this interaction with what is supposed to be familiar, what is supposed to be comfortable. To answer an earlier question, how computers can change to affect users, that's one of the areas where they have changed. The original computers interacted by flashing lights at you, and now it is much more about allowing you to feel comfortable with what's going on.

CN: That's definitely a value. There is more maybe. Like if you think of designing systems, what are things you find important?

TQ: Well, I think being pleasing to the eye is a very important value. Things have to be easy to look at, so that you're not uncomfortable with them. I think design is very important when it comes to this. Originally, when the Internet started off, one of the ways to attract attention was to blink text at people, and it's an absolutely horrible way of interacting with something when it is completely flashing at you, and people of course didn't like it. That's why I think the design of the user interface, the ability to look at something and feel comfortable with the colours, the styles, is very important.

CN: So there is some kind of physicality or representation of physical aesthetics.

TQ: Absolutely

CN: Is there something like freedom, or responsibility?

TQ: Freedom and responsibility.

CN: Or is there transparency or identity?

TQ: Well, how you interact with a computer - I don't really see that, I don't really see computers that way. I don't really see them as having any independence or identity at all. I see them as being tools.

CN: So let's focus on the action thing then, on what tools are supposed to do. So how would you describe what a computer does, in other terms? Let's see, what terms. I look for another language, because you say 'oh you know, it is just a tool and it computes and it's just fast', but actually my whole concept of space has changed, my whole concept of speed has changed, my concept of connection, impact. So this has affected my perception of the world very deeply. So it cannot only be just a tool that computes, because actually the effect of its actions is really profound.

TQ: Absolutely.

CN: Yes. So we need to find other words to talk about the actions.

TQ: Well, I don't really know how to answer that. I suppose, at core I'm an engineer and a lot of my beliefs come from my engineering background, I look at things as just being tools, and I look at the creativity behind computers as being human-inspired, human-created, rather than computer-created. And I suppose I just look at them as tools. I don't look at them as beautiful or as creative at all. I don't think I ever have.

CN: But if we think about the nature of action - I don't know which words to use with this. I mean, if I pick up a cup it's an action, or maybe that's just a gesture. But if I would open the door, it is really an action, it's a fact, a material change. In computers, words have become deeds, and because of the way they deal with data, they transform. I mean, like, the process they do and the outcome they give, or the effect of the outcome they give, does not seem like it is from the same category.

TQ: I don't understand what you mean by that.


Sequence 4

CN: Well, if I talk about a train - there is a very beautiful book by Virilio about the train, the horizon of speed - because a train is just a thing that moves. But a train has become so many other things, which you can also see in the marketing of trains: a train is also joy, a train changes the sense of distance. It changes the sense of how I move; I am not waiting in a train, I am on my way, but I am still in the train, and having free time. So it looks like a transportation thing changed, but actually it changed mobility and location. So I can say the train is just tracks and a people mover; I can also say it's a way of transportation, transposing concepts. I don't know, I am just looking to understand whether we should look in that direction.

TQ: Well, computers do change people; interacting with your computer oftentimes means interacting with other people. Especially nowadays, with these concepts of social networking, your interaction with computers is actually influenced by your interaction with other people. In that regard they're more than just tools, but they're tools to allow you to do this. They do have some impact on your life. Whereas if it were just a dumb terminal, you were just entering data, it does something, it returns results. Nothing really changes for you; you are just using it to do something very quickly. So I suppose the difference between what I mean by tool and what you mean by computer - my understanding is that you see computers changing you. I don't; I see a computer as a facility that allows you to change by interaction with other people.

CN: Maybe I should say it is a formatter of communication.

TQ: That's true, I agree with that. That it allows you to communicate.

CN: Yes, but also format, what I can and cannot communicate.

TQ: Yes, but the formatting is very dependent on what the people

CN: Yes, yes.

TQ: But yes, OK, I accept that.

CN: Maybe that's the reason to understand it, because, you know, the body formats what we can do. But now the systems also format us.

TQ: Oh, absolutely. And how you communicate with other people through computers is very different from how you communicate in person. And you have to look at just how much - the long word is nastiness, but that's what I mean - there is on computer networks. When people are not talking to each other face to face, when they don't have to interact ever again with that person, they are truly cruel, and they are really nasty, whereas when they are in person you tend to tone down your real feelings. There is not as much violent hatred.

CN: So how would the formatting, let's say it is a formatting tool. A formatting tool of human presence.

TQ: Yes absolutely, that's a good way of putting it.

CN: So it's a formatting tool, and in a way, if you format presence, you're pretty high up in the ranking of tools. What would that act consist of?

TQ: The formatting of presence?

CN: So it squeezes it into, you know - there is a master-slave model, there's an efficiency model, there's the... it tries, with these kinds of models, a sort of minimal adaptation to human presence.

TQ: Or none

CN: Also designers talk about the loss of physical reality in the things you design

TQ: I agree, like the features of the face

CN: Well, there are some basic human physical laws that have to be followed. Anyway, let's take it from there, the formatting. Is that in the distribution? How is it distributed? .. Does it influence the formatting, you think?

TQ: Well, the scale of course, of how many interactions you want to have at any one time. That's one aspect of the formatting.

CN: Till we become larger.

TQ: To a degree, yes. Our impact on the world is larger. And smaller, because it's many more voices, but you have much more ability to reach people in many more places. But still, it's one voice among billions, rather than among thousands in a big speech, for example. But, I'm just trying to think about how you format things. The interaction, the formatting, tends to be dependent on the application, what you're trying to do. So if you're just trying to communicate one way, versus two-way communication, then a big screen with data in a very clear, organized fashion is fine. Whereas if you want to have interaction, you have to have a different sort of set-up. A different way of doing that, if that makes any sense.

CN: It makes sense, yes.

CN: I'll think about it more. Let's go to relation. In relations the witnessing and the recognizing of the other is very important. Computers facilitate a lot of relations. So this whole tuning of presence, in formatted presences, how would it happen? How do you do it? How do you tune your presence?

TQ: It depends on who you're talking to; it depends on what you're using your computer for. If you are using it on a professional basis, well, people know who you are, so there is an anonymity problem here: if your presence is there, they know who you are. You tend to format your communication in that sort of manner; you tend to be a bit more formal, a bit more restrictive in what you say. Whereas if you are just commenting anonymously on the internet, you can pretty much say what you want, whether it's nasty or obnoxious or not.

CN: You also do that, you enjoy that.

TQ: Well, personally not so much. I like the anonymity because it gives you the ability to say things, for example in a political sense, that I would never say if someone knew who I was; I don't like people knowing what I think. I don't like people knowing what my views are on certain things. I don't mind if the person knows me, but I'm not going to announce to the world my position on something. And in an anonymous fashion you can't have an influence on what other people think, which I personally find very interesting. So I suppose those are the two different ways of having relations...

CN: Let's turn to the crisis.

TQ: The crisis?

CN: You're in this project on crisis, so we are now at the crisis moment. Maybe you can shortly tell what the project is.

TQ: Yes. So this project is a European-funded project, and the aspect I work on is a use case: we're looking at taking some work that some other partners are doing and applying it to a very specific use case. And this use case is crisis management. So what we're looking at doing is modelling a crisis as it happens. For example, if an area of the Netherlands is flooded, certain things have to happen towards resolving this crisis, this disaster, and this involves interaction between people and computers in fact. The reason we are doing this is, you cannot prepare for every single thing that will ever happen. So you can prepare for flooding, but you can't prepare for a hundred schools to be flooded simultaneously. That's a much more difficult prospect, because you have certain concerns. So what we are looking at doing is developing a model of crisis management and using that to simulate different disasters. And to see how the overall plans that are in place can be adapted, whether these plans are good enough, or whether they need to be modified to reflect this sort of crisis.


Sequence 5

CN: So what is the crisis management?

TQ: Well, that's a very good question, what is crisis management? The most basic answer is: to save lives. You don't care about anything else until all the lives have been saved, or as many as possible. The second priority - there's a number of priorities - is to save property. Once the lives are saved, you want to prevent any more property being damaged. And after that, it is to return things to normal.

CN: So if you model crisis management, crisis means sudden, there is no time.

TQ: Well no not always. So for example, if you know a storm is coming you can prepare in a certain way, you know something is going to happen. But most of the time, yes I'd agree.

CN: So there is a limited amount of time.

TQ: Yes. Sometimes no time, as you said. If a plane crashes, it's not expected.

CN: No, not no time, but a limited amount of time. And I assume, I'm not sure, that in a moment of crisis, perception, noticing, witnessing become very important.

TQ: Absolutely.

CN: So how do you do this?

TQ: That's a very big problem. For example, different people witness the same thing in very different ways, and one person's 'there is a major flood in my street' is another person's 'oh it's not that bad, it is alright, it will go down'. So both people call the emergency services, and one says 'there is an absolute deluge of water, it's flooding' and another person says 'there is a moderate amount of water'. Who do you believe? And where is that input taken, how is it dealt with? It's a huge problem; it's based on the reputation of the person a lot of the time. If a policeman calls and says there's a serious flood in the neighbourhood, it tends to be taken more seriously than when a random citizen calls saying 'there is a huge flood in my backyard', and I suppose that's the difference.

CN: So that's one, is there more?

TQ: Of witnessing in crisis management, that's the biggest one, absolutely. Perception is very important as well in the outcome of a disaster. Perceiving good results of a bad thing oftentimes means doing everything that you can, even though it is a waste of time. I can give a very specific example: in the Netherlands a lot of the country is beneath sea level, and flooding is a constant thought on people's minds. But people live in very risky areas, on dikes, higher areas, that if there was a flood would be cut off. And the official line is, if you're living in a very risky area and you do not evacuate, nothing will be done to save you. That's what the official statement is: 'we will not save you if you're sitting on the outer islands'. However, if people are left to die with no attempt to do anything for them, it's considered a very bad outcome. So I cannot imagine that official line will actually be held in these sorts of things. Another example is schools; schools officially are not evacuated by the state anymore. So if your children are at school, you're expected to go and get them. And people have very strong perceptions of children. If there is a child in danger, you would expect the child to be saved first. That would be the natural human reaction. Whereas, well, officially they say they will not do that, but that would be a very bad outcome in terms of perception. If you let a school flood and all the children die, that would be very bad. And you can imagine the government failing and falling because of that, and that is a very important part of it.

CN: So there are ethical positions.

TQ: Yes, absolutely, and even though the ethical position might be save as many people as possible, it's actually save as many of the right people as possible. But defining who is worth saving or not is a very dangerous thing to do.

CN: And you don't have to do that in the models?

TQ: No, we take the official line, so we are modelling the existing plans, and of course because we're doing that, it doesn't actually reflect reality perfectly.

CN: So let's go back to the witnessing, yes? If I witness something, there is interaction between me and what I see. So I can only witness something in relation to myself. So that's also a big part of the definition of witnessing: it's from being here that I witness. What I see will also change me. So the woman who says there is a huge flood will turn hysterical; the guy who says it's nothing will, you know, have his house completely flooded because he didn't close his door or whatever. So how do you simulate this? This interaction between - because this must be going so fast.

TQ: It's impossible to simulate at that sort of complexity. We don't have - well, so I say the computers do things very fast, but they're not fast enough to do that, because what you're trying to do is simulate the entire world at high speed, which is impossible. There are always changes, so a butterfly beating its wings in Hong Kong affects things over here. We cannot simulate everything, so you have to take a subset.

CN: What is a subset?

TQ: So for example, technically we should model every single policeman, every single fireman, every single ambulance driver, and all their equipment as well, in a system. But to do that would require knowing where every single one of them is at every single moment, plus a huge amount of computation just to simulate each one and simulate their reactions. And each one is different; every policeman handles a crisis in a different manner. They have different effects on what is going on, so one person has a huge amount of experience, and she will then make a better decision on the street. It is impossible for us to..

CN: So where do you draw the line?

TQ: What we are doing is, as I said, taking a subset. We're not actually modelling the nitty-gritty details, the small details of what is happening. We're modelling the overall organisational model. So we look at it at a much higher level; we look at the central managers of the Netherlands - it's called the LOCC, they are the crisis coordinators of the Netherlands. And we are trying to model how they direct orders to the organisations beneath them, the police organisation or the fire organisation. So making overall policy changes rather than 'go to this street, put out the fire in that house, and evacuate people first of course'.
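
The level of abstraction TQ describes - a central coordinator issuing policy-level orders that cascade down through organisations rather than to individual units - can be sketched as a toy model. Everything below is illustrative, not the project's actual code; only the name LOCC comes from the interview.

```python
# A toy hierarchical command model: orders enter at the top of the
# organisational tree and propagate downward, never straight to
# individual policemen or firemen.

class Unit:
    def __init__(self, name, subordinates=()):
        self.name = name
        self.subordinates = list(subordinates)

    def issue(self, order, log):
        """Record receipt of an order, then cascade it to subordinates."""
        log.append((self.name, order))
        for sub in self.subordinates:
            sub.issue(order, log)

# Rough shape of the hierarchy described in the interview: the national
# coordinator (LOCC) above the police and fire organisations.
locc = Unit("LOCC", [
    Unit("police organisation", [Unit("regional police")]),
    Unit("fire organisation", [Unit("regional fire brigade")]),
])

log = []
locc.issue("prioritise school evacuations", log)
print([name for name, _ in log])
```

The log shows the order reaching the coordinator first, then each organisation, then the regional units beneath them, which is the policy-level view the model takes instead of simulating every individual responder.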


Sequence 6

CN: But it's interesting, because if you want to manage a crisis - there is a really nice story from Blair about Belfast, where he said 'you can only manage a crisis like this by knowing every little gritty detail'. He said 'I knew every roadblock, and for every roadblock I had a specific solution'.

TQ: Yes, and that's absolutely true. Especially when you're dealing with something as difficult as - I'll try to put this politically - an armed resistance, whether perceived or not, it is very difficult. One of the biggest problems in most conflicts has been lack of understanding, lack of knowledge of what's happened in the past, and I suppose one of the most famous sayings about this is 'those who don't know history are doomed to repeat it'. And if you study history, you would see that things that are happening today, poor decision making that's happening today, have happened before.

CN: So what can systems contribute here?

TQ: The one thing they can contribute is knowledge: they can store vast amounts of knowledge, and you can talk to your computer, or you can extract it from your computer if you can formulate the question correctly - 'what should I do in this regard, because these things are happening' - and that's exactly what we are trying to do in crisis-management systems. We're looking at trying to analyse all the different scenarios that have happened, or trying to create those. And when the problem occurs you can say, you know, 'this number of schools have been flooded, how many people do we need to save all those children?' And that's the sort of thing computers are very good at.

CN: So can you go through one more abstraction level? If you say OK, we have a community of people and systems, even though the systems, you know, are slaves and masters, they have a huge impact. In moments of crisis, this is to be challenged, what is the identity of systems at that moment?

TQ: Knowledge repositories. A two-word answer.

CN: OK, knowledge repositories

TQ: So they have a lot of knowledge about what has happened in the past and what is happening now and can correlate between the two of them. So humans can say 'Well what do we need to do, what do we need to have' for example, a defined routine, a defined regard, so that we don't have traffic jams.

CN: And the other one is formatting presence. And how do you do that, because everyone wants to talk to everybody.

TQ: I suppose the advantage of dealing with organisations such as the police department, fire department, is they have a very rigid hierarchy, and that everybody's voice is not the same. So the commanders tend to have a, well, they can shout louder, they can shout people down I suppose. And that advantage means you can direct orders and you have people who have a strategic overview. And then that's going on everywhere, as you said, as Tony Blair said, to know exactly where every roadblock is, you have institutional knowledge of where the different units are located, where you can direct your resources from.

CN: Because I would imagine that in crisis, getting information bottom up is crucial for systems.

TQ: Oh absolutely, and that is one of the most important things: you have to have as much information as possible.

CN: So how do you orchestrate this?

TQ: Well again, it is hierarchical. OK, it is hierarchical in the organisations we are looking at, but you also take input from members of the public. And that tends to be a bit more chaotic and difficult to coordinate. And I suppose, I have looked at disasters in the past, and there is always this post-mortem, where they're looking at what went wrong. And oftentimes what went wrong is people didn't notice something that was important. So they didn't notice that this particular plane didn't have enough fuel in it because a meter was broken. And the cause is never this huge problem, it always tends to be very small.

CN: So on the level of witnessing?

TQ: Absolutely, and you're saying if you'd just recognised that particular mistake, this would never have happened. But of course what people fail to understand is you would never have known there was going to be a huge problem if you had noticed the mistake in the first place. And yes, it is easy to criticize after the fact.

CN: But do you feel that people interact with systems differently at a moment like that?

TQ: Yes, people have more trust in a computer, when there is a crisis taking place. They will tend to do what the computer tells them to do. Even more so than normal.

CN: Even more so than normal?

TQ: I think so, yes. My feeling, yes. But if your GPS is saying 'take this road', and there's a problem to deal with, some sort of disaster, you really would take that road, like 'OK, it knows something that I don't know'.

CN: We have five more minutes, I didn't talk to you about the levels of witness presence in systems. Like maybe it is different on the level of infrastructure. Or on the level of middleware or applications, we mostly talked about applications. But this is also the five minutes where you can say if you ask these questions, this is important to talk about. Maybe you want to tell.

TQ: About what the witness presence in systems is? I don't really know what to say. I think the difference between applications and infrastructures is dramatic, because even though I would claim that computers are tools, I do believe that, at the application level they have to deal with humans. There is a certain compromise made there to deal with humans. I mean the easiest way to get a lot of information is to fill it out in text form, one line after another, so the computer can deal with it efficiently, but that doesn't allow humans to use it. Well, humans don't know the order the computer needs it in, so it has to have name, age, address, in that order, rather than just a blank sheet, fill in the information and deal with it. Whereas at the infrastructure level, computers talk to computers most of the time, so the communication tends to be much more efficient.

CN: But the way these are designed also reflects ideas about organisation, delegation and mandate.

TQ: Absolutely, and particularly because a human at one stage or another has actually created the communication language. So there are institutional, human-based constraints built into computers; they can't just discover how to talk to each other.

CN: So how do they witness each other?

TQ: How do computers witness other computers? Extremely primitive I suppose. They recognize that they're there or not there.

CN: They don't resonate with others?

TQ: No, a computer doesn't change because it's dealing with an HP computer instead of a DELL computer. It doesn't make any difference to it. It doesn't have any feelings; computers don't have feelings.

CN: We have feelings.

TQ: We have feelings, yes.

CN: Is there anything you want to add?

TQ: I think it has been a very interesting conversation for me. It made me think of things in ways I wouldn't normally think about them.

CN: It was very nice for me to talk to you.

TQ: Well thank you very much.

CN: Thank you very much.

This website is work in progress. You can use the content of this website under the Creative Commons license Attribution-NonCommercial-NoDerivs.