Self-organizing systems illustrated

This video shows a physical, self-organizing system.

These days a lot of people talk about such kinds of systems for information technology, called self-X systems: self-X as in self-healing, self-organizing, self-stabilizing. The self-X paradigm envisions software systems with some inherent dynamics that automatically keep a complex system in a “good” state while it is constantly under “bad” influences. An example of a “bad” influence is the stream of requests hitting a web server; the “good” state is that the server responds. State-of-the-art technology lets a web server stay “good” as long as there are not too many requests. In case of a (distributed) denial-of-service attack, the server will no longer respond to each request – the “bad” influence wins. A self-X web server would be able to deal with this by

  • distinguishing “bad” from “good” requests,
  • automatically providing additional capacity (in memory, CPU and network) for the “good” requests, and
  • efficiently denying all “bad” requests with minimal effort.
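To make this a little more concrete: such self-X behaviour is usually organized as a feedback loop that monitors the system, analyzes the observations, and then adapts the system. The following Python sketch is a deliberately naive illustration of the three points above – the rate-based classifier, the thresholds, and the capacity doubling are all made-up placeholders of mine, not a real defense mechanism or a real infrastructure API.

```python
import random

# Hypothetical sketch of one iteration of a self-X feedback loop for a
# web server. Thresholds, the classifier and the dummy traffic are all
# illustrative assumptions.

CAPACITY_THRESHOLD = 0.8   # scale out when load exceeds 80% of capacity
BAD_RATE_PER_IP = 100      # requests/s per source treated as attack traffic

def classify(request):
    # (1) Distinguish "good" from "bad" requests; here a crude rate heuristic.
    return "bad" if request["rate_per_ip"] > BAD_RATE_PER_IP else "good"

def control_step(requests, load, capacity, blocked):
    # (2) Provide additional capacity for the "good" requests.
    if load / capacity > CAPACITY_THRESHOLD:
        capacity *= 2  # stand-in for provisioning memory, CPU and network

    # (3) Deny "bad" requests with minimal effort: block the source once,
    # so later requests are dropped before they consume resources.
    for request in requests:
        if classify(request) == "bad":
            blocked.add(request["source_ip"])

    return capacity, blocked

# Dummy traffic: mostly normal clients plus a few flooding sources.
random.seed(1)
traffic = [{"source_ip": f"10.0.0.{i % 5}",
            "rate_per_ip": random.choice([1, 500])}
           for i in range(20)]
capacity, blocked = control_step(traffic, load=90, capacity=100, blocked=set())
print(f"capacity now {capacity}, blocked sources: {sorted(blocked)}")
```

The point is not the (trivial) code, but the loop: the system observes itself and reacts without an administrator in between.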

This sounds too good to be true. And it’s obviously not easy to achieve – otherwise we would already have such systems everywhere. The automatic software updates your operating system and web browser perform these days are among the first industrial applications of this idea. The Skype login mechanism is another example. I guess you want more of this. That’s why a lot of people are researching self-X systems, mostly by thinking of clever algorithms, architectures, network technologies etc.

But the development of these systems requires a profound understanding of self-organization. The good news is that many disciplines have some concept of a self-organizing system. The bad news is that these concepts all differ, although they have something in common. Of course, chemistry, physics and informatics cannot share one definition of a self-organizing system – each talks about different things. But this does not help when one tries to build such a system.

As a starting point, though, I think the five self-organizing metronomes are very illustrative. Let’s have a look at them again:

Each metronome on its own is a single entity that can swing if it is pushed. The movement of one metronome’s pendulum is independent of the other metronomes. That’s why the five metronomes swing out of sync at the beginning of the clip. Imagine trying to synchronize all of them yourself by stopping and pushing each metronome one after the other – you will most likely not succeed.

What then happens in the clip is that the metronomes are coupled. This coupling is the key to this self-organizing system. The coupling happens by putting all metronomes on a common board and placing this board on two rollers so that the board can move in the direction of the pendulum movement. The metronomes were not changed – more precisely, the mechanism of the pendulum was not changed. But by the laws of physics, the movement of each pendulum applies a force to the base of its metronome, which is the board. So now the metronomes are coupled: each applies “its” force to the board, the sum of these forces pushes the board, which moves, which pushes the metronomes. As a result, each pendulum is “pushed” by the others. Instead of five independent metronomes we get five coupled metronomes.

Now, by another law of physics, the entire system seeks an energetically optimal point. This optimal point lies in a synchronization of the moving parts such that the entire system loses as little energy as possible. If one pendulum is out of sync with the others, its movement will damp the movement of the others and vice versa, and damping means loss of energy. In fact, it is only because of damping that the system reaches a synchronized state at all. In the worst case, all metronomes would have to stop for the system to become synchronized. But luckily, each metronome can give away energy on its own (each has its own damping mechanism). Therefore, the system can reach a state where each pendulum has been damped to the point where it is no longer damped by the movement of the other metronomes.

I am no physicist, so I am not aware of all details of this process, nor can I explain them appropriately. But what I can say for sure is that the synchronization was achieved by two things:

  1. The coupling mechanism allows communication (i.e. an exchange of energy) between the parts that are to be synchronized.
  2. Each part is designed in a way that it can be influenced via the coupling mechanism, most importantly by push and repulsion.
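To see how little is needed for these two ingredients to produce synchrony, here is a minimal simulation sketch in Python. It uses the classic Kuramoto model of coupled phase oscillators as a stand-in for the real mechanics – the mean-field coupling term and the coupling strength K are my modelling assumptions, not a faithful model of the board. Still, the coupling term is exactly ingredient 1, and the way each oscillator’s phase reacts to it is ingredient 2.

```python
import numpy as np

# Minimal sketch: five "metronomes" as Kuramoto phase oscillators.
# All parameter values are illustrative, not measured from the clip.

rng = np.random.default_rng(42)

N = 5                                   # five metronomes
K = 1.5                                 # coupling strength ("the board")
omega = rng.normal(2 * np.pi, 0.1, N)   # slightly different natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)    # initially unsynchronized phases
dt, steps = 0.01, 5000

for _ in range(steps):
    # Ingredient 1: the coupling communicates every phase to every other part.
    # Ingredient 2: each oscillator is pushed or pulled by that shared signal.
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += (omega + coupling) * dt

# Order parameter r: 0 = incoherent, 1 = fully synchronized.
r = abs(np.exp(1j * theta).mean())
print(f"after {steps * dt:.0f} simulated seconds: r = {r:.3f}")
```

With K = 0 the oscillators stay as independent as the uncoupled metronomes and r stays low; with sufficient coupling, r climbs towards 1 – synchronization emerges without any central coordinator.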

The difficulty in computer science is that we have to find such coupling and interaction mechanisms in a world where there are no laws like the laws of physics – well, not yet.

activity report: making the brand

I’m slightly late in fulfilling my promise to post regularly. At least I have a good excuse. So here’s the line-up of what has been going on over the last three weeks.

  • I’ve found my PhD topic, more or less. I’m working out some ideas and will present them to my prospective PhD supervisors next week. Then they can tell me that 90% of my ideas have already been tried, that I’m left with a bunch of non-problems and unsolvable ones, and that the rest is far too much for a single PhD thesis. We’ll see. But I’m quite sure that I’ll do something on Declarative Modelling and Verification of Workflows (for disaster management).
  • I’ve attended a soft-skills workshop on project management, leadership, and networking with my colleagues from the Graduiertenkolleg. It was quite useful, as we worked out a number of projects for the next phase of the research group, and it brought the team members closer together. Many thanks to Golin Wissenschaftsmanagement for that one…
  • I’ve attended our Kolleg’s first workshop on “Meta-Modelling” which, as far as I can tell, is an approach to gaining control over the development of modelling and programming languages and their changes. The methodology, which is also going to be developed in Metrik, might prove useful when I start to relate constraints and operational concepts.
  • I’ve helped prepare our Kolleg’s second workshop on “Workflows”, which brings together people from Metrik and the B.E.S.T program. We will work together with Prof. Wil v.d. Aalst and Prof. Kees van Hee on (work)flow techniques for services and service-oriented architecture. I hope to get some more thoughts on how the term “service” relates to wireless (sensor) networks and their functionality.
  • I’ve continued work on our leporello leaflet, and it looks great – we’re almost done with it. In the process, we’ve developed some sort of “corporate identity” for Metrik. Together with a decent web strategy, we’re making our way up the Google rankings.

That’s it for now. I need to prepare my PhD topic presentation…

activity report

This is a new series of blog posts whose purpose is to keep me posting once a week and to document my progress more tightly. Looking back on the past few weeks was rather disappointing in that sense: I am doing too little on my research and too much on university management. I hope that things will change once I have to re-read the little progress I’ve made. Here we go.

  • I’ve isolated the topic of understanding the notions of ‘service’, ‘service-oriented architecture’ and, related to that, ‘service level agreement’. The main reason is that everybody speaks of SOA and the related terms so readily that one is inclined to think these are well-understood topics. Unfortunately, if you think about applying SOA to, and creating services for, wireless sensor networks, you end up with nothing to start from. ‘Service’, ‘SOA’ and all the related terms have been defined in the field of business process management and workflow management, with lots of hardware and software technology attached. But people keep stressing that SOA is an architectural paradigm. I haven’t found a non-technological definition yet. That’s what I’d like to understand: what is SOA? What is a service in SOA?
  • I’ve deepened my understanding of flexible workflows and adaptivity concepts for workflows. I still appreciate van der Aalst’s classification of flexibility and adaptivity of workflows. And I have started to understand how one could realize flexible workflows – thanks to Sadiq, Sadiq and Orlowska.
  • I’ve continued to supervise four students in a tutorial project related to a lecture on information integration. It’s strange how the perspective on the matter changes once you’ve earned a degree. It can’t be anything else, because ‘my’ students are my age, have studied even longer, and have industrial experience. Still, they are rather reluctant to solve the tiny tasks I am issuing.
  • I’ve continued planning a small workshop for my group next April and another small workshop with visiting researchers this December.
  • I’ve spent a day at the GeoForschungsZentrum Potsdam with my PhD graduate school to learn about the tsunami early-warning system in the Indian Ocean and further research topics on natural disasters and disaster management projects.
  • I’ve been working with some of my fellows on a leporello leaflet about our graduate school to improve outward communication. We’re doing pretty good things – you only realize that when you write them down compactly, avoiding the unnecessary talk, condensed to the facts in a lean and clean argument.
  • And I’ve learned about simulating the distributed detection of earthquakes in the SAFER project, which aims at building an early-warning system with the help of sensor networks. They are our closest partner project, and we are likely to get a decent amount of input regarding the technical requirements for implementing a reliable system that reacts to earthquakes or other unpredicted hazards.

Altogether, that’s been quite a lot of stuff. Yet I can’t feel any progress. I hope that’s going to change with this column.

go with the flow

It’s been about five weeks now in which I have done almost nothing on my thesis. There were just too many other “important” things to be done: creating a poster for a workshop, reviewing papers and other people’s theses, preparing a lab tutorial for a lecture, and attending seminars…

At least the seminar gave me the opportunity to talk about my thesis topic at least twenty times, each time to a different person. From the questions I got back, I realized that the topic is going to be quite ambitious: “Workflows in a disaster management system.” I still have absolutely no idea what that could be. Talking to a good friend of mine last weekend, the idea formed that I really should go out there to the people who actually do disaster management. They will certainly know their workflows (maybe they don’t use that term, but they’ll know them). The question for them is: are they going to like a system that supports them by telling them who is going to do what and when? My question is: are their workflows interesting enough?

Besides that, I just re-checked my project outline. It actually says that I shall also focus on workflows of a resource-management layer in a peer-to-peer middleware architecture. Is there a difference between these two tasks, or are they just the same? I’d prefer it if there were no difference. Workflow is workflow, right?

the gap in between

Roughly five weeks have passed since I officially started work on my PhD. In between, I attended a summer school on the convergence of some quite hyped technologies – wireless sensor networks, RFID and peer-to-peer technologies – organized by people from the TU Darmstadt (DVS and KOM). I’m working on a paper about this with a bunch of really nice people. It’s gonna be interesting. I also spent some days in Eindhoven with my group from HU Berlin, talking about research in workflow modelling and analysis and starting a cooperation on common research interests.

In the last three weeks I realized that there is some gap between the idea of a distributedly designed infrastructure for disaster management and the design of applications for these things, which is meant to be the topic of my PhD thesis. The meta-problem seems to be that these ultra-cool wireless sensor networks won’t be doing much more than sensing and smart routing of data. Not much of a workflow there. But if workflows are meant to be used in a setting with tens or hundreds of thousands of nodes and interconnected devices to assist in disaster management and recovery, where are they going to appear? And assuming we have an answer to that: who or what is going to execute or enact them? The answer to the latter problem is quite likely: some central node with lots of computing power. But we already know how that works (more or less), so there won’t be a new research challenge in it for me.

Without forgetting about the second question (still hoping for a more challenging answer), I am currently turning to the first question. I might answer it with these new questions:

  • What is a workflow in a disaster management system?
  • What does it look like?
  • What makes it different from other workflows?
  • Do we need new formal methods to describe them?
  • How does self-organization and adaptivity affect these workflows?

One thing that is quite likely to be different, and which is a little out of focus in current workflow research, is data and resources, which are a crucial matter in disaster management systems at least. Could be a starting point.

the sketchy notebook

I consider this blog a documentation of thoughts and results that are worth noting. In between two worthy notes, I will have many, many thoughts not so worth reading. But I need to document them, and I hate killing trees. So I have set up a public Google Notebook to collect all my sketchy scribblings. There is a permanent link to the notebook in the sidebar (go and find it). So in case you think you have time to read that… now you can.