March 23rd, 2008
Published on March 23rd, 2008 @ 08:32:40 pm , using 421 words, 2724 views
In a presentation at QCon San Francisco called "Next Generation Testing", about "Designing for Testability", and in his post, Cedric Beust (together with Alexandru Popescu) talks about Test Driven Development (TDD), and he does not talk to promote it.
Actually, he does not talk against it either; he simply questions its use, asking whether it is a good idea or applicable to all cases. Cedric points out several problems with the test-up-front technique, and one of his lines is: "Promotes micro-design over macro-design". Follow me on this...
Let's change the last D from Development to Design. Test Driven Design is design accomplished by basing the decisions on the tests (the principle may read: follow the option that is most testable). No one is saying when to test, nor whether you create the test before or after the actual coding. We are actually thinking about design, and taking into account one -ility: testability.
Now, from my post about the design levels (and here is what drew my attention to that line of Cedric's), the term "micro-design" is defined as the design decisions the developer makes during the implementation of the tactical design. Those decisions should be in accordance with the tactical design, which in turn should support the strategic design.
The question now is whether Test Driven Design is driving a design level that is not micro-design. Tests, if we mean unit and functional tests as I think TDD does, are a developer-level thing. In that case they should drive micro-design only, and should never drive the other design levels.
Let me explain it in other words: TDD shouldn't tell you which classes to use, which artifacts (queues, files, etc.) to employ, or which relations exist between functional units. But it may help you decide which call sequence to use, which functions a class should have, which structures to rely on. Those are micro-design decisions that may later be refactored.
TDD, used at the micro-design level, will help define interfaces, improve cohesion and even enforce encapsulation and information hiding (if used correctly, of course, since we can also create tests that violate all those best practices). But that happens when the developer is creating the operational-level descriptions (the code). There should be a tactical design in place before doing this, and a strategic design in place before that. We can use testability to drive some decisions at those levels, but it cannot be the main -ility at all.
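To make the point concrete, here is a minimal sketch (the class name `PriceCalculator` and its behavior are invented for illustration): the test, written first, ends up deciding the interface, a single `total(items)` method and its return value. That is a micro-design decision; the test says nothing about queues, files, or relations between functional units.

```python
import unittest

# Hypothetical class whose interface was shaped by the test below.
# The test decided we want a single total(items) method returning a
# rounded float -- a micro-design decision a later refactoring may change.
class PriceCalculator:
    def __init__(self, tax_rate):
        self.tax_rate = tax_rate

    def total(self, items):
        """Sum the (name, price) pairs and apply the tax rate."""
        subtotal = sum(price for _name, price in items)
        return round(subtotal * (1 + self.tax_rate), 2)

class PriceCalculatorTest(unittest.TestCase):
    def test_total_applies_tax(self):
        calc = PriceCalculator(tax_rate=0.10)
        self.assertEqual(calc.total([("book", 10.0), ("pen", 2.0)]), 13.2)

# Run the test programmatically so the example stays importable.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PriceCalculatorTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note that nothing in the test constrains the tactical or strategic levels; it only pins down the operational sentences.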
March 16th, 2008
Published on March 16th, 2008 @ 07:10:14 pm , using 320 words, 434 views
Working with developers, I often find them thinking of design as a generic word meaning "sketch the classes you are going to create".
Of course, I don't share that definition. I think of design as:
"composing the sentences, in a particular design language, that describe the solution of the problem"
That said, I add that there are several design levels and types. That is, you may find out you are designing when you think you are doing something else!
Amnon H. Eden and Raymond Turner have some publications where you can read about a proposed ontology (using the Intension/Locality criteria). Part of their proposal is that software design should be divided into three categories: Strategic, Tactical and Implementation.
Of course, Strategic design sentences are those of architectural design, where decisions impact the whole. Tactical sentences are what we have always called "the design", where the focus is local to one part of the problem and abstract enough to be variable. Then the Implementation sentences are local too, but they are also the actual execution.
That last one I call Operational design. I also call it Micro Design. It is the design the developer does when creating the sentences that define the executable solution. Note that the decisions here are also important. Note also that there are design patterns at this level too, different depending on the selected language (those are called Idioms).
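A tiny sketch of what an idiom looks like as an operational-level decision (the file name is invented for illustration): both fragments express the same tactical-level solution, "open, use, release"; only the operational sentences differ.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "notes.txt")

# Without the idiom: the developer writes the release explicitly.
f = open(path, "w")
try:
    f.write("operational design\n")
finally:
    f.close()

# With the Python idiom: the with statement is a design pattern that
# exists only at this level; other languages use their own idioms
# (RAII in C++, try-with-resources in Java) for the same solution.
with open(path) as f:
    content = f.read()
```

The choice between the two is a micro-design decision; the levels above it do not change.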
There is another point to this definition of design: we can describe a solution at different levels, and it will be the same solution. The good point here is that each level is an abstraction of the level below, where details are taken out of the way to clearly see the broader requirements. Not all designs can be executed, and not all levels have all the details required to execute as they should.
March 11th, 2008
Published on March 11th, 2008 @ 07:30:36 pm , using 604 words, 2646 views
Back after several months!
It is incredible how much work a change in your job position drags in its tail!
But for now, let's talk about the title.
I was recently reading a great blog by Michael Stal (which I'm adding to the interesting blogs section). His latest post refers to Domain Specific Languages (DSLs).
The list of examples of what can be considered a DSL is interesting. Being as attached to formality as I am, my take is on the D and the L of the acronym.
First, it should be Domain driven. That means this "tool" will work inside a domain scope, referring to the elements that make sense inside that domain. If used as a communicator (as I will explain later), the reader must have knowledge of that domain: for instance, when you explain your solution to a manager in terms of resources, time and cost.
The second is the Language constraint. That is, a DSL is a language: a grammar and a vocabulary (plus, maybe, rules to expand that vocabulary). The language is used to describe things in sentences; those things could be the problem or even the solution. That description can be read, understood and even executed.
I recall a very old issue of Dr. Dobb's that talked about mini-languages (or was it micro-languages?), which were scripting languages in essence. An example was the MS Excel macro language of that time. Specifically, the macro language had the typical constructs for conditions and loops, but the vocabulary was focused on cells, rows and columns, selections and texts, formulas and styles. The Domain was very explicit. It also mentioned other languages derived from C; those were subsets of the C language with some tweaks. Were those DSLs too?
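A toy interpreter can show the idea in a few lines. Everything below, the mini-language, its statements and the cell names, is invented for illustration; the point is that the vocabulary (cells and operations on them) is what makes the Domain explicit.

```python
# A toy interpreter for an invented mini-language in the spirit of
# those old spreadsheet macro languages: generic statement structure,
# but a vocabulary of named cells.
def run(program, cells=None):
    """Execute SET/ADD/SHOW statements over a named-cell store."""
    cells = {} if cells is None else cells
    output = []
    for line in program.strip().splitlines():
        op, *args = line.split()
        if op == "SET":        # SET A1 5
            cells[args[0]] = float(args[1])
        elif op == "ADD":      # ADD A1 B1 INTO C1
            cells[args[3]] = cells[args[0]] + cells[args[1]]
        elif op == "SHOW":     # SHOW C1
            output.append(f"{args[0]} = {cells[args[0]]}")
    return output

print(run("""
SET A1 5
SET B1 7
ADD A1 B1 INTO C1
SHOW C1
"""))  # -> ['C1 = 12.0']
```

The grammar is trivial, but the sentences can be read, understood and executed, and they only make sense inside the cell domain.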
July 20th, 2007
Published on July 20th, 2007 @ 12:47:55 pm , using 484 words, 19498 views
W3C has published the Web Services Description Language (WSDL) Version 2.0 Recommendation. That means we can talk about a new standard now.
From history, you may recall that SOAP was not a standard by committee but a de facto one, and that WSDL 1.1 was a member submission by Microsoft and IBM. Seven years later, WSDL 2.0 comes to see the light.
I mentioned before in this blog that WSDL 1.1 offered four types of messaging, combinations of the style and use parameters: an RPC style and a Document style, each either Encoded or Literal (RPC/Encoded being the one kept mostly for backward compatibility).
Also, WSDL allowed an extension to define the transport, but it was just to say we were going to use HTTP, and to send a SOAP structure as its payload. The method was fixed to POST, and the header contents and such were fixed too. Not much flexibility.
Well, WSDL 2.0 changes that. It now offers only one style, document; with it you can simulate your RPC calls if you want. It also allows defining the content of the document using an XML Schema, but that doesn't mean you will be sending XML over the wire (more on this later)!
But it doesn't stop there. WSDL 2.0 significantly expands the HTTP binding spec. Now you can specify GET instead of POST, and indicate that you want your parameters (defined in the schema types, remember?) encoded in the URL! You can even indicate that you want multi-part messages!
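To give an idea of what this looks like, here is a sketch of a WSDL 2.0 HTTP binding fragment. The names, namespace and operation are invented, and the interface, types and service sections are omitted; it only shows GET as the default method, a URL template, and URL-encoded input serialization:

```xml
<!-- Sketch only: tns names and the operation are hypothetical. -->
<description xmlns="http://www.w3.org/ns/wsdl"
             xmlns:whttp="http://www.w3.org/ns/wsdl/http"
             xmlns:tns="http://example.org/clients"
             targetNamespace="http://example.org/clients">
  <binding name="ClientsHttpBinding"
           interface="tns:ClientsInterface"
           type="http://www.w3.org/ns/wsdl/http"
           whttp:methodDefault="GET">
    <operation ref="tns:getClient"
               whttp:location="clients/{id}"
               whttp:inputSerialization="application/x-www-form-urlencoded"/>
  </binding>
</description>
```

Compare that with the old fixed POST-plus-SOAP-envelope arrangement: the method, the location and the serialization are now all declared per operation.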
How friendly is that to REST? Keep on reading...
June 27th, 2007
Published on June 27th, 2007 @ 07:43:49 pm , using 470 words, 3883 views
One of my students recently asked me an ORM-related question. He had a little project he was working on to present in another class, and there was a little screen where he showed some data. Let's say he had a table with clients (relational, you may guess), and the client had a reference to the company table, since a client belongs to a company. Of course, the reference field in the database was a company ID.
His question was very simple, although very useful for explaining some concepts on the spot. He was, of course, converting that client row into an object, to be later displayed in the little window. So he asked me: "Should I add a company ID attribute to the object, or should I add the company name directly?" I looked at him and asked back: "How do you feel about adding that ID to the object? Is it natural?" He glanced at his computer and then said... "No."
Then we began to discuss. Actually, we started by defining that object's function in the application, its role, what it was there for. The answer was very simple: it was there to provide the screen with the data to display. Then we went into discussing what the user needs, and the answer was that the user needs the company name, not an ID. Lastly, we discussed whether we needed two objects: one Client object with a company ID and one Company object with a matching ID and a name. We found that having those two objects would require us to match them. That is a relational operation that is natural to the database. So it was obvious we'd better let the database do its job and send us a new hybrid object with the client information and the company name already in place.
But wait a minute: is that OK? Weren't we breaking the relational rules and the normalization of the data? After thinking for a couple of seconds, I would say no. We weren't.
Data is data, and it is stored and related in the database. You can process it and produce new information by manipulating that data with the DB tools. That new information is not meant to be stored, but to be consumed by the business logic layers. That is BL information; it is not simple data anymore.
So the idea of a one-to-one ORM does not fit this example. If we have data, and we have tools to process it and transform it into business information, then we should let the BL layers consume that BL info rather than spend cycles processing the data by hand. Now the objects model the reality, not the data structure, and we work with cooked information, not raw data.
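The discussion above can be sketched in a few lines. The schema and all the names (`ClientView`, the tables, 'Acme', 'Alice') are invented for illustration; the point is that the JOIN stays in the database and the object that comes out is already the hybrid, cooked BL information.

```python
import sqlite3
from dataclasses import dataclass

# Hypothetical hybrid object: client data plus the company *name*,
# not its ID -- it models what the screen needs, not the table layout.
@dataclass
class ClientView:
    client_id: int
    client_name: str
    company_name: str

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE client  (id INTEGER PRIMARY KEY, name TEXT,
                          company_id INTEGER REFERENCES company(id));
    INSERT INTO company VALUES (1, 'Acme');
    INSERT INTO client  VALUES (10, 'Alice', 1);
""")

# The JOIN is the relational operation that is natural to the database;
# the rows that come back are already cooked BL information.
rows = conn.execute("""
    SELECT client.id, client.name, company.name
    FROM client JOIN company ON client.company_id = company.id
""").fetchall()
views = [ClientView(*row) for row in rows]
```

Nothing here denormalizes the stored data; the two tables stay as they were, and only the consumed result is hybrid.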