Photos of a Giant Squid

Some Japanese photographers have captured the first-ever photographs of a giant squid in its natural habitat. Cool! Don’t miss the slide show at the left of the page.

Santa and Contradictions

Perhaps you happened to notice that my previous "proof" that Santa exists could be used to prove other things, too. For example, that Santa doesn’t exist. Or perhaps your children have already used it to prove that they don’t need to go to bed.

OK, so what’s the catch? It seems possible to prove anything using this method, even contradictions. Is Mathematics inconsistent, after all? Well, no. It’s just that we’re not used to the idea that a definition itself can cause contradictions. This is something mathematicians realized at the beginning of the 20th century, when they investigated the foundations of mathematics.

For example, Bertrand Russell found a "paradox" when he postulated a set X containing exactly those sets that don’t contain themselves. This leads to a contradiction: Suppose X contains itself. Then X satisfies the membership condition, so X doesn’t contain itself. Now suppose X doesn’t contain itself. Then X satisfies the membership condition, so X contains itself, by definition! Both cases give us contradictions. The conclusion Russell didn’t draw from this (I think) is simply "so X isn’t a set". Just to be extreme, suppose that Y is defined as a set that both does contain 0 and does not contain 0. Is anyone surprised that we get a contradiction from that? I guess not.
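In set-builder notation, Russell’s definition and its fatal consequence fit on one line:

\[
X = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad \bigl( X \in X \iff X \notin X \bigr)
\]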

So, let’s check where S in the Santa example leads us. S is defined as "If S is true, then Santa exists". If S is to make sense, it must have a well-defined truth value: either true or false. Let’s check both. Can S be true? Yes, at least if Santa exists: the implication "if S is true, then Santa exists" holds whenever its conclusion holds, so that’s a logical possibility. But can S be false? Then the antecedent of the implication described by S, "S is true", is false, and an implication with a false antecedent is true. Which means that S is true. But we assumed S is false! So the definition of S forces S to be true, and therefore Santa exists!

But if we view definitions as equations, things make sense. The definition of S is really an equation over truth values, and that equation has only one solution: "S is true" (and Santa exists). Other definitions have no solutions (like the set of all sets not containing themselves), and others might have several.
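If you want to see the equation view in action, here is a minimal sketch in Python (the function and variable names are mine, purely for illustration): it treats the definition as an equation over truth values and brute-forces the solutions.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# The definition of S is the equation: S == (S -> "Santa exists").
# Try every combination of truth values for S and for "Santa exists".
for s, santa in product([False, True], repeat=2):
    if s == implies(s, santa):
        print(f"solution: S={s}, Santa exists={santa}")
```

The loop prints exactly one solution: S true, and Santa existing. The Russell set, translated the same way, gives an equation with no solutions at all.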

Christmas is Coming

Just a few months before Christmas! But be prepared when your children start asking you whether Santa really exists or not. It’s not as easy to convince them as it once was. The solution to convincing today’s enlightened children is of course to be very rigorous. We need to prove to them that Santa really exists.

So, let’s be pretty formal, and define S to be the sentence "If S is true, then Santa exists". That’s just a definition; nothing unusual going on. It seems that if we prove that S is true, we’ll be done. We’ll see. Now the actual logical proof starts.

  1. Suppose S is true. (This is just an assumption.)
  2. By the definition of S, we can replace S by the sentence it names, and we get: "If S is true, then Santa exists" is true. Not much gained yet; we’re probably just warming up.
  3. But we can use the assumption "S is true" once more, together with step 2. That gives us "Santa exists". Not bad! Of course, this holds only because we assumed that S is true, so we’re not there yet.
  4. Let’s summarize what the assumption gave us: "If S is true, then Santa exists". But that is exactly what S itself says. Finally something: we’ve proved S itself to be true, with no assumptions left!
  5. Now S is true (step 4), and "If S is true, then Santa exists" is also true (that’s just S again), so obviously Santa exists. Done!
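For those who prefer symbols, the same argument (a version of what logicians call Curry’s paradox) compresses to a few lines, writing P for "Santa exists":

\[
\begin{aligned}
&\textbf{Definition:}\quad S \equiv (S \to P)\\
&\text{1. Assume } S.\\
&\text{2. Unfold the definition of } S\text{: } S \to P.\\
&\text{3. From 1 and 2: } P.\\
&\text{4. Discharge the assumption from 1--3: } S \to P\text{, which is exactly } S.\\
&\text{5. From 4 and its unfolding: } P.
\end{aligned}
\]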

So, just sit down together, the whole family, a few days before Christmas, carefully go through this proof, and you’ll have removed one uncertainty from the celebrations. And keep in mind that there are grownups who haven’t understood this fact yet, either.

This is my contribution for the people out there who still want to celebrate that old-fashioned Christmas!

(The proof is freely adapted from Boolos and Jeffrey, "Computability and Logic".)

Why Software Sucks

Scott Berkun has written a very nice essay on "Why Software Sucks (and what to do about it)".

The Common Sense Behind the ATAM

I thought I’d better say a few things about the Architecture Tradeoff Analysis Method, too. It’s really built upon common sense (for an architect, anyway), even if I can’t fully judge whether the building itself reaches way too far above the clouds. For a small or even medium-sized organization, I’d say it definitely does. However, that doesn’t matter. The pieces of common sense behind it are good, and somewhat nontrivial. My point is that anyone doing architectural work can benefit from those pieces, regardless of whether you run the actual method or not.

First, I’ll readily admit that I haven’t understood the complete ATAM. I’ve only read and understood an old overview paper, and it seems that the method has evolved a lot since 1998, when that paper was written. Perhaps I’ll come back with corrections when I’ve read all about it (if I ever do). I promise to tell you only things that make sense to me, anyway! If you’re annoyed by this, just pretend that the paper has just been published! ;-)

So, what is it all about? Actually, the abstract of the overview paper says a lot. Here it is:

This paper presents the Architecture Tradeoff Analysis Method (ATAM), a structured technique for understanding the tradeoffs inherent in the architectures of software intensive systems. This method was developed to provide a principled way to evaluate a software architecture’s fitness with respect to multiple competing quality attributes: modifiability, security, performance, availability, and so forth. These attributes interact—improving one often comes at the price of worsening one or more of the others—as is shown in the paper, and the method helps us to reason about architectural decisions that affect quality attribute interactions. The ATAM is a spiral model of design: one of postulating candidate architectures followed by analysis and risk mitigation, leading to refined architectures.

OK, that makes sense. If we add another server to increase availability, we increase the cost, and perhaps decrease security if we aren’t careful. Perhaps we have to co-locate lots of code in order to increase performance, thus making the architecture less modifiable. ATAM is a method for making these tradeoffs explicit, and for getting, in a structured (iterative) way, to a software architecture that satisfies all the requirements on those properties.

It’s important to note that ATAM itself does not include ways of assessing modifiability, performance, security and all that; sub-methods such as the SAAM (or perhaps common sense) are used for obtaining those attributes. It’s really a "meta-method". But never mind; we’re not really interested in the formalities of the method itself right now.

I suggest that we dive directly into the steps of the method; they aren’t that difficult to understand.

  1. Collect use cases that should be supported by the architecture, and requirements that the resulting system should satisfy.
  2. Construct a nice architecture based on what you got in the previous step.
  3. Analyze all the relevant properties (or attributes, as the terminology goes), such as modifiability, availability, performance and so on.
  4. If all the relevant properties were good enough, we’re done, and we can proceed to design and implementation! (But if you’re a bit curious, you could actually go on anyway, for one more round.) Otherwise, we know that we need to modify the architecture in order to improve one or more attributes.
  5. Look at several (sensible) ways of modifying the architecture, and see how the properties of the architecture change. For example, adding a server might increase availability and cost. The attributes that change significantly under a modification are noted as sensitivity points.
  6. Look at what you got in step 5. Some of the changes are likely to have affected more than one of the attributes; for example, availability and cost, for the added server. Those changes (scenarios, perhaps?) are noted as tradeoff points. Those are the points where we have to be careful when changing our architecture. Perhaps the properties we have to improve are connected to lots of other attributes in this way?
  7. Now we use the knowledge about the tradeoff points found in the previous step, and redesign the architecture so that we believe we’ve come closer to satisfying the requirements on the attributes. The tradeoff points simply serve as guides here. For example, if your company has no budget for new hardware, perhaps you have to find another way of meeting the availability requirements than adding another server.
  8. Go back to step 3.

OK, this really looks like common sense, all of it. Probably, if you’re an architect, this is what your brain is already doing, or at least something like it. But I still think we can gain a lot from this kind of "formalized common sense". We can use it to communicate this kind of knowledge to others, and also to check ourselves, to make sure we’re reasoning in a sound way (perhaps at times when we’re working a lot of overtime and aren’t fully alert!). Sometimes our brains aren’t as accessible as we’d like them to be. :-)
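To make the loop in steps 3–8 concrete, here’s a toy sketch in Python. Everything in it (the attribute scoring, the threshold, the way a change is picked) is invented for illustration; in the real method the analysis is done by sub-methods like the SAAM, and the redesign by human judgment.

```python
from typing import Callable, Dict, List

# Quality attributes scored 0..1, e.g. {"availability": 0.7, "cost": 0.4}.
Attributes = Dict[str, float]

def atam_loop(
    architecture: dict,
    analyze: Callable[[dict], Attributes],            # step 3: assess the attributes
    good_enough: Callable[[Attributes], bool],        # step 4: requirements met?
    candidate_changes: List[Callable[[dict], dict]],  # step 5: sensible modifications
    max_rounds: int = 10,
) -> dict:
    for _ in range(max_rounds):
        scores = analyze(architecture)                # step 3
        if good_enough(scores):                       # step 4: done!
            return architecture
        effects = {}
        for change in candidate_changes:              # step 5
            new_scores = analyze(change(architecture))
            # Attributes that move significantly are sensitivity points.
            effects[change] = {
                attr: new_scores[attr] - scores[attr]
                for attr in scores
                if abs(new_scores[attr] - scores[attr]) > 0.05
            }
        # Step 6: changes that move more than one attribute mark tradeoff points.
        tradeoff_points = [c for c, e in effects.items() if len(e) > 1]
        # Step 7: redesign, guided by the tradeoff points. A human weighs them;
        # this toy version just picks the change with the best net effect.
        best = max(effects, key=lambda c: sum(effects[c].values()))
        architecture = best(architecture)             # step 8: back to step 3
    return architecture
```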

Discussion about Architecture Astronauts

Don’t miss the nice discussion at Joel’s forum on my previous posts about software architecture, in relation to "Architecture Astronauts"!

How to Find the “Right” Architecture, Part III

This is the last post in a series highlighting the Software Architecture Analysis Method, SAAM. The two previous posts are here (part I) and here (part II); please read them first!

We’ve come as far as developing a relevant set of scenarios, and we’ve described the architecture at a level of detail that’s appropriate for the scenarios. Actually, we can have several candidate architectures at this step. Since the SAAM doesn’t give you a number like "this architecture got eight points out of ten", you should in fact compare at least two architectures to see which is better. But anyway, now it’s time to map the scenarios onto the architecture(s). How do we do that?

There are actually two kinds of scenarios, direct and indirect. The direct ones are already supported by the architecture (you probably constructed the architecture to support them), and the indirect ones are not, so they require modifications of the architecture in order to be supported. So, first, we decide which ones are of which kind.

Then, for the direct scenarios, we mark within the architecture which components and connections are used by each scenario. Now, if there are marks on lots of components and connections, we have low cohesion for that scenario: the functionality represented by the scenario is spread out over many parts of the architecture. Thus, we can now compare different architectures with respect to how well the functionality of each scenario is kept together. This mapping of direct scenarios onto the architecture description should be done with all stakeholders present (see the previous post). They’ll learn a lot about the system!
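As a minimal sketch of the bookkeeping (all scenario and component names below are made up), the mapping for direct scenarios might look like this:

```python
# Each direct scenario maps to the components and connections it uses.
# All names are invented for illustration.
direct_scenarios = {
    "view account balance": {"web-ui", "auth", "accounts-db"},
    "print monthly report": {"web-ui", "report-engine", "accounts-db", "printer-gw"},
}

# A scenario that touches many parts of the architecture has low cohesion:
# its functionality is spread out instead of being kept in one place.
for scenario, parts in direct_scenarios.items():
    print(f"{scenario}: touches {len(parts)} components/connections")
```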

But the interesting scenarios (for software architects) are the indirect ones. For each of those, we list the changes the architecture needs: for example, a change in a component, a change in a connection, or the addition of a new component or connection. If we’ve numbered the scenarios from one to twenty, say, we can write each scenario’s number on the architecture description at the places where modifications are required. If you’re a bit more sophisticated, you could also factor in some kind of estimate of how difficult each modification would be, but let’s keep it simple. We’ll end up with an architecture description (or several) with lots of numbers on it. Can those numbers tell us anything? Yes indeed! The "bad" thing we’re looking for is a phenomenon called "scenario interaction".

What does it mean for two scenarios (let’s just look at two) to interact? It means that the two scenarios (representing extensions to the architecture, remember?) require changes in the same component or connection. Graphically, there’s at least one component or connection with two numbers on it. And why is that bad? For the same reason that tight coupling is bad: two functionalities are inherently connected; there’s no separation of concerns. In practice, it means that to accommodate both changes, two developers might have to work on the same component, on pieces of code that depend heavily on each other.
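Once the mapping is done, spotting interactions is mechanical. A small sketch, again with invented names:

```python
from collections import defaultdict

# Each indirect scenario (numbered) maps to the components/connections
# it would change. All names are invented for illustration.
indirect_scenarios = {
    1: {"auth", "session-store"},   # say, "add single sign-on"
    2: {"auth", "accounts-db"},     # say, "support password policies"
    3: {"report-engine"},           # say, "add PDF export"
}

# Invert the mapping: which scenarios put their number on each component?
touched_by = defaultdict(set)
for scenario, parts in indirect_scenarios.items():
    for part in parts:
        touched_by[part].add(scenario)

# Scenario interaction: a component or connection carrying two or more numbers.
for part, scenarios in sorted(touched_by.items()):
    if len(scenarios) > 1:
        print(f"interaction at {part}: scenarios {sorted(scenarios)}")
```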

It could also be that two scenarios interact a lot because they are very similar. That’s a very subjective criterion, of course, but we have to take that into account, too. For example the scenarios "change the password of the administrator" and "change the password of an ordinary user" are quite similar, and will probably have interaction in almost every component.

After doing that, we need to look at the amount of interaction we got, and make a (subjective!) judgment on which architecture is better. Or perhaps just on whether the single architecture we studied was good enough. This step isn’t easy to perform, but I think it’s better to keep it subjective than to try to extract some objective measure of the quality of the architecture. The method simply provides you with the relevant objective facts, which make it easier for you to decide which architecture is the better one. If you have a lot of courage, involve all the architecture’s stakeholders at this step, too.

So, finally, I hope I’ve managed to explain the SAAM and why it appears to be pretty intuitive and useful.
