September 23rd, 2008
Published on September 23rd, 2008 @ 04:43:58 pm , using 969 words, 4485 views
I have owed this one for a long time.
In 2000, David Cavallo published his article "Emergent Design and learning environments: Building on indigenous knowledge". In that article the term "Emergent Design" was coined as an alternative to blueprints. Cavallo indicates that, according to many analysts researching why educational change projects fail, the cause was actually the committee-generated blueprint created to drive the change. The result was always a distorted implementation of the blueprint that did not solve the problem.
So, Cavallo proposes another "way" of doing that process. He bases his discussion on two innovations: digital technology and administrative decentralization. The second one does not change management practices; it only pushes them down to other levels of the organization so they become distributed.
Now, Cavallo proposes some principles to guide this: Constructionism (Papert: we build knowledge when we construct things), which is based on Constructivism (Piaget: we build upon what we already know). The others are technological fluency, applied epistemological anthropology, immersive environments, critical inquiry, long-term projects, and Emergent Design.
The idea was to immerse children in long-term projects so they become fluent in the technological language. With that, the students start looking critically at the newly understood world, and using applied epistemological anthropology the teacher can detect the learning forces of the students' freedom and design "on the fly" the next learning framework to exploit the possibilities.
All of this is handled using the emergent design approach. The learner focuses on building new knowledge on top of existing knowledge, and tools and guidance are provided to facilitate that. Thus there is no preexisting path; the path is built as the learner goes. It is pure discovery by construction.
Now, Cavallo says this learning experience is not the only one possible. Kent Beck mapped it to software development, where it is presented as an alternative to blueprints (that is, the dreaded BDUF, Big Design Up Front).
Emergent design is built on top of a Make-Verify-Improve life cycle. Since the new world does not exist yet, developers take baby steps, one at a time. Decisions are not made at early stages; a just-in-time approach is followed, and the developers themselves are the ones making those decisions.
This has good points:
- Avoids prediction. Decisions are made based on knowledge acquired by constructing, when the developers know that it works, and at the time it is needed.
- Verifiable products. The created products are real and working, so they can be validated against the requirements. New development is done on top of this real thing, not on top of paper ideas.
- Decisions are backed by information obtained from feedback on real products, not by assumptions.
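The Make-Verify-Improve cycle above can be sketched as a small test-first loop. This is only an illustration; the `Cart` class and its API are hypothetical, not anything from Cavallo or Beck:

```python
# A minimal sketch of one Make-Verify-Improve iteration.
# The Cart class and its methods are invented for illustration.

class Cart:
    def __init__(self):
        self._items = []

    # Make: the smallest step that could possibly work.
    def add(self, price, quantity=1):
        self._items.append((price, quantity))

    def total(self):
        return sum(price * quantity for price, quantity in self._items)

# Verify: a real, working product is validated against the requirement.
cart = Cart()
cart.add(10.0, 2)
cart.add(5.0)
assert cart.total() == 25.0

# Improve: only now, with feedback from a verified product, do we decide
# the next baby step (say, discounts) -- a just-in-time decision, not an
# up-front prediction.
```

Each pass through the loop leaves a verifiable product behind, which is exactly the point of the list above.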
Still, there are a few bad points to mention:
- Refactoring vs. redesign. Note these are two different things. Refactoring is about improving the code after it works; redesign means changing what was done to use another approach or structure, simply because the first one does not work. So emergent design may not be about creating and then improving, but about creating and then recreating.
- Late decisions at the macro level. The problem with decisions made too late is that those affecting a level higher than the code may require a much larger amount of rework, recreating functionality in another way. Remember: refactoring is work to improve; rework is work to make it work as it should.
- There is certainly more information (an immersed developer knows more about his local environment than a designer does), but there is less of the higher-level view needed to make the correct decision. That is, the grass-level view Cavallo explains works for individuals creating their own knowledge, since that knowledge is allowed to be incomplete (there is plenty of data the learner will not know). When you have individuals working together, you need a second-level view to understand how everything the individuals are doing fits together.
- Validation is locally centered. When individuals validate decisions against the current local value, each isolated solution may work; when integrated, the correct parts may not add up to a sound overall structure.
- It is assumed the individual will make the correct decisions based on information, which takes a long time to acquire (Cavallo's long process). Thus initial development is a challenge, since no individual has enough information yet to make certain decisions.
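The refactoring/rework distinction above can be made concrete with a toy example. Both functions and classes here are invented for illustration:

```python
# Refactoring: the code already works; we improve its shape without
# changing its behavior.

def mean_before(xs):
    s = 0
    for x in xs:
        s = s + x
    return s / len(xs)

def mean_after(xs):  # same behavior, clearer expression
    return sum(xs) / len(xs)

assert mean_before([2, 4, 6]) == mean_after([2, 4, 6]) == 4.0

# Redesign (rework): the approach itself was wrong and must be recreated.
# Suppose the first attempt assumed the data always fits in a list, but the
# real requirement is a running mean over a stream. The structure, not the
# style, has to change -- functionality is recreated in another way.

class RunningMean:
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def add(self, x):
        self.count += 1
        self.total += x
        return self.total / self.count

rm = RunningMean()
for x in [2, 4, 6]:
    result = rm.add(x)
assert result == 4.0
```

The first change improves working code; the second throws the shape of the solution away, which is the rework the bullet points warn about.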
So, these and other pitfalls can be avoided by taking care of tactical and strategic decisions:
- Use guidelines, such as principles, to drive decisions in case of doubt.
- Create from the bones. That is, create on top of a secure and sound base that explains by itself the rationale of the solution. That may require bringing some critical decisions up front, always knowing those decisions may change in the future.
- Every decision has to have a rationale that supports it. The idea is that later, if a decision proves to be wrong, the rationale may help to choose another one.
- Multilevel coding. I know this is something not many people understand. Defining the global structure, then defining each component with a tactical structure to bind the bones, and finally writing the code to add meat to them, is what I call coding at levels, since every description of the solution is code. This helps a lot in visualizing when and how a decision at any level can be made or improved.
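A minimal sketch of what coding at levels could look like, using hypothetical names: an abstract contract as the global bones, a component bound to those bones, and method bodies as the meat.

```python
from abc import ABC, abstractmethod

# Strategic level: the global structure -- the bones of the solution.
class Repository(ABC):
    @abstractmethod
    def save(self, key, value): ...

    @abstractmethod
    def load(self, key): ...

# Tactical level: a component bound to the bones; its internal structure
# is a local decision (here, a plain dict).
class InMemoryRepository(Repository):
    def __init__(self):
        self._store = {}

    # Micro level: the operational meat.
    def save(self, key, value):
        self._store[key] = value

    def load(self, key):
        return self._store.get(key)

repo = InMemoryRepository()
repo.save("answer", 42)
assert repo.load("answer") == 42
```

Every one of these levels is code, and a decision at any of them (swap the dict for a file, add a second implementation of the contract) can be located and changed without touching the others.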
I hope this helps to clarify a myth about Emergent Design: it is not an easy two-step methodology of blindly doing and then stepping back to see what emerged from the disconnected decisions. And it may not be good for all projects, nor for all people. Lastly, it is not an alternative to architecture, but just another technique to construct the architecture itself.
September 22nd, 2008
Published on September 22nd, 2008 @ 01:17:04 pm , using 60 words, 428 views
Back again after several months. Sorry, I have tons of posts to write, but also too much work, and a baby is coming too! So my wife has the same priority as before, above everything, but now she also gets more of my time.
Let's see if I can keep a better pace writing the ideas!
April 26th, 2008
Before the Internet days (some people say before television), we had different, local villages with no communication among them. Now we live in a global village. We need to communicate and share knowledge, even across different languages and cultures. Furthermore, we learned it is not just us and "my limited world": anything we do affects the whole.
Now, when working on a real-world problem, working to find a solution of course, we need to communicate, and all of our work complements the work of others. There is no single person who fights all the villains while the rest of the team observes and cheers.
Solving a problem with a software-intensive system involves working with all parts of the system: hardware, software, people, and processes. No one lives alone with their computer; everybody interacts with all the other components of the system, and the developers are part of that system too!
Making sense of that, the strategy of working only on my local line, forgetting there is something beyond it, goes back to the local world and denies globality. Does it mean you need to work on everything at the same time? No. It means you work on your local line knowing there is something bigger of which that line is an important part.
What does this have to do with Steve Yegge's post about NOOBs? Well, going to the extreme, the post indicates there is much waste, fat we need to remove to be healthy. The comments, even the modeling, and the describing language structures are needless for the seasoned, non-noob programmer. I agree, as long as the programmer is working in a system that has only two components: the programmer and his lines of code.
Unfortunately, that is not the reality. A system not only requires the work of many other components, some non-software and some non-programmers, but also requires all those components to communicate at design time and at execution time. Robert Martin once said that one of software's functions is actually to communicate with people, readers who may not have participated in the development. Does that mean the comments should stay? Not exactly, but the code itself should be readable and understandable.
One way of doing that, for the development group, is using sentences that make sense and perform the function (you can scramble the sentences and do the same thing while being unreadable).
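A hypothetical illustration of that idea: both functions below do exactly the same thing, but only one reads as a sentence that states its own intent, without needing a comment.

```python
# Scrambled: correct, the compiler is happy, but the reader must decode it.
def f(a, b):
    return [x for x in a if x in b]

# Readable: the names form the sentence; no comment is required.
def keep_authorized_users(users, authorized):
    return [user for user in users if user in authorized]

assert f(["ana", "bob"], {"ana"}) == ["ana"]
assert keep_authorized_users(["ana", "bob"], {"ana"}) == ["ana"]
```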
Globalization in the system means all components need to communicate and work with other components. Creating the product with those characteristics, because they are needed, is not being a NOOB; it is being aware of the fact that the software is not an island.
April 26th, 2008
Published on April 26th, 2008 @ 06:31:45 pm , using 448 words, 1097 views
During all these discussions, on this blog and in other forums, I have found some semantic dissonance. This means some people use the same word, and even the same concept, but actually understand different things by it.
So, words like architecture and design mean paper to some and process to others; they may be normal, unnecessary, or even offensive. This is a little post that tries to define some of the words I use, just to be sure anybody who reads me understands my meaning.
Design: As a verb, the process of decision making towards the creation of a solution. As a noun, a one-word name for Design Description.
Design Description: The description of the decisions taken toward a solution and their rationale. It can be a document, image or even multimedia.
Code: Sentences, written in a particular language, that describe the solution. It can be at any level, and may (or not) be executable.
Strategic Design: Design that focuses on architecturally significant issues; most of its decisions are global and described in an intensional way.
Tactical Design: Design that focuses on tactical issues; most of its decisions are local and described in an intensional way.
Micro Design: Design that focuses on operational issues; most of its decisions are local and described in an extensional way.
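The intensional/extensional distinction in these definitions can be shown with a toy example: an intensional description gives the rule, an extensional one enumerates the cases. Both names below are invented for illustration.

```python
# Intensional: describe the set by a rule
# (typical of strategic and tactical decisions).
def is_even(n):
    return n % 2 == 0

# Extensional: describe the set by listing its members
# (typical of micro design).
evens_up_to_ten = [0, 2, 4, 6, 8, 10]

# The rule covers every enumerated case, and infinitely many more.
assert all(is_even(n) for n in evens_up_to_ten)
```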
Domain: The set of concepts and relations that define a reality. It is not a description, since those concepts may be infinite or unbounded.
Abstraction: The result of abstracting: the process of eliminating detail without losing the essence of the concept being abstracted.
Model: An abstraction of a reality from a specific domain.
Modeling: The process of describing a possible reality at an abstraction level.
Architecture: The system structures, the externally visible properties of the components, and the relations between them. NOT PAPER.
System: An independent structure of interrelated components.
Software-Intensive Systems: Systems where the majority of the components are software based.
Solution: Part or whole of a system that solves a problem from a domain.
Meta something: Meta means beyond, above, self-referring. Using those senses of the prefix, we may say meta-modeling is modeling models. Meta-data is data about data. A meta-language is a language that allows us to describe languages. Meta-comments are comments about comments. And so on.
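In Python, for instance, some of this metadata is directly visible; a hypothetical example:

```python
def area(radius: float) -> float:
    """Return the area of a circle."""
    return 3.14159 * radius ** 2

# The function body is the data; the docstring and the type annotations
# are data about that data -- metadata the interpreter carries along.
assert area.__doc__ == "Return the area of a circle."
assert area.__annotations__ == {"radius": float, "return": float}
```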
Now some notes: from the above, we can say interesting things. For instance, modeling is not a paper-generation process, but a description process. Modeling can be used to describe a solution, thus it can also be interpreted as coding. And code is not just executable lines (in fact, they are not executable; they need more work).
When time and other discussions require it, I will be posting more glossary terms.
March 30th, 2008
Published on March 30th, 2008 @ 05:59:52 pm , using 791 words, 3189 views
Paul pointed me to a TSS post I had missed. Steve Yegge posted some rants about complexity and came to a conclusion about how good and clean scripting languages are (Portrait of a NOOB).
I couldn’t post a comment to it, since it was already closed to comments. So I guess I will answer here.
In this first part of a response to that blog, I will reproduce the comment I posted to TSS. Probably nobody read it, since I posted it one month later.
First, let’s summarize Steve’s post (or what I understood of it):
1. Comments are metadata. Metadata is of no use to the compiler, so it can be gotten rid of.
2. Noobs are comment-obsessed creatures, and thus metadata enthusiasts.
3. Modeling is a way to comment the intention of the code in the code language itself. So modeling is a metadata-creation process.
4. Teenagers are obsessive modelers.
5. Classes, and OO in general, are metadata in the code itself.
6. Static typing is also metadata, since the real thing is memory afterwards.
7. The Seasoned Programmer is an enlightened creature that does not use metadata, and thus can see the real thing when coding, not ethereal domain models.
What I agree with:
Metadata is useless for the compiler. True.
Modeling is a metadata creation process. True.
What I disagree with:
Metadata (at least not all of it) is not useless. Metadata is important, although not to the compiler.
Probably one of the things Steve Yegge misses in the post is that a language is much more than just telling the compiler what to do. In the DSL post, we talked about languages oriented to one specific domain, using the vocabulary and grammar of that domain only. The idea is that languages are there to help you solve a real-life problem, not to help the compiler. Languages are for you (and your peers), not for the machine. And maybe not all languages are meant for a computer to read.
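A tiny sketch of that idea, assuming an invented ordering domain: an internal DSL in Python whose "sentences" read in the vocabulary of the domain, for the humans, while the machine still executes plain method calls. All names here are hypothetical.

```python
# A hypothetical internal DSL: the vocabulary belongs to the domain
# (orders, items, discounts), not to the machine.
class Order:
    def __init__(self):
        self._total = 0.0

    def add_item(self, price, quantity=1):
        self._total += price * quantity
        return self  # returning self keeps the "sentence" flowing

    def apply_discount(self, percent):
        self._total *= (1 - percent / 100)
        return self

    def total(self):
        return round(self._total, 2)

# Reads as a sentence in the domain language:
total = Order().add_item(10.0, 2).apply_discount(10).total()
assert total == 18.0
```

The compiler gains nothing from this vocabulary; the people reading and maintaining the code gain everything.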
Furthermore: Lack of metadata does not mean better code.