Monday, December 27, 2010
STUB 2: K&R around the Campfire
Fond memories of learning C.
Thursday, December 23, 2010
STUB 1: SICP & Assignment
A very brief sojourn through Abelson and Sussman's landmark book: "Structure and Interpretation of Computer Programs".
Monday, December 20, 2010
Snow and Pandemonium
These pictures are from Heathrow on Monday morning, 20 December, after a light dusting of snow paralyzed flights on Friday night and Saturday morning. Though most people were cheerful in a "what can you do about it" way, there was also anger, frustration, and tears of desperation as people struggled to gain some modicum of control over their fate. I understand that there are times when events are beyond reasonable control. But this was just a little snow. I can even understand that Heathrow and the airlines had not planned properly for unexpected cold weather. What I cannot understand is the breathtaking incompetence that has left thousands upon thousands of travelers without any kind of information.
The British Airways website was inaccessible until earlier today. The phone lines were so jammed that when she got through on her 41st call, my daughter was actually told by an automated attendant that the hold time would be 594 minutes. (That's nearly ten hours.) How has this once-great airline fallen so far that it is reduced to having mumbling women, with bullhorns that don't work, walk through a crowd of people telling them that their best option is to somehow get to the customer service desk (an obviously hopeless task)? I think it may be time to consider reducing Heathrow and British Airways to third-world status. I would expect this kind of treatment in Central America.
Friday, December 17, 2010
Clean Code Course in Dallas.
I'll be teaching a three-day Clean Code course in Dallas on the 15th-17th of February. Come one, come all!
You can sign up here: http://www.eventbrite.com/event/1019973769
Thursday, December 16, 2010
test
italic bold monospaced.
This is two.
----
Robert C. Martin (Uncle Bob) | unclebob@cleancoder.com
Uncle Bob Consulting LLC. | @unclebobmartin
847.922.0563 | web: objectmentor.com
Tuesday, December 14, 2010
Too Lazy to "Type".
Loren Segal (@lsegal) makes an interesting point in his blog (http://gnuu.org/2010/12/13/too-lazy-to-type/): most dynamic programs are, in fact, not particularly dynamic. Even duck-typed Ruby programs would require very little change to make them static. He includes a Rack example showing this transformation, from:
class MyMiddleware
  def call(app) ... end
end
to:
class MyMiddleware
  include Callable
  def call(app) ... end
end
He asks, in the end: "how often do you write Ruby code that can really not have some kind of class based inheritance type system? For me, I can’t think of many cases. How often do you pass arbitrary types to methods without at least having a well-defined set of types that can be accepted by the method?"
He's quite correct that many (if not most) programs written in dynamic languages do not need to be dynamic, and that with a very few simple changes we could add the hints that would make them static. This would allow the compilers to optimize the programs in ways that are impossible for dynamic programs. However, there is a cost -- and the cost is not simply typing (and by that I mean keystrokes or LoC). The cost is _dependencies_. The simple "include" in Segal's example above is a _coupling_, and that coupling adds complexity to the design of the application. It might not seem that such a simple statement would be very confounding, but the problem is not one simple statement. The problem is that nearly all our classes would suddenly require instrumentation with appropriate include or extend statements.
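In Java terms, the cost looks something like this. This is a hypothetical sketch (the interface and class names are mine, not Segal's): each class must declare its coupling to the interface, and a method later added to that interface forces an edit in every implementor, whether it needs the method or not.

```java
// Hypothetical sketch of the static coupling that Segal's "include" implies.
interface Middleware {
    String call(String app);
    // Adding, say, "void shutdown();" here would force EVERY class below
    // to change, whether or not it has any use for shutdown logic.
}

class LoggingMiddleware implements Middleware {      // coupling #1
    public String call(String app) { return "log(" + app + ")"; }
}

class AuthMiddleware implements Middleware {         // coupling #2
    public String call(String app) { return "auth(" + app + ")"; }
}

public class CouplingDemo {
    public static String runChain() {
        // The chain works only because each class declared its dependency
        // on Middleware; the dynamic version needs no such declarations.
        Middleware[] chain = { new LoggingMiddleware(), new AuthMiddleware() };
        String result = "app";
        for (Middleware m : chain)
            result = m.call(result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(runChain()); // prints "auth(log(app))"
    }
}
```

The program runs fine today; the point is only that every `implements Middleware` clause is a dependency the duck-typed version simply does not have.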
Even that might not seem all that bad, but the requirements for interfaces change. We often want to add new methods to existing interfaces. When we do, we must add those methods to all classes that implement those interfaces, whether they need them or not. If some of our derivatives don't need those methods, we might be tempted to split the interface into one that has the new methods and one that does not; but that forces us to find all the classes that implement the original interface and decide, in each case, which interface each should now derive from. And of course interfaces depend on other interfaces, which depend on other interfaces. And so the tangle grows. If you haven't been programming in a static language for a while, it is easy to forget the complexities that these couplings lead to. But in the end, we like dynamic languages not because we are too lazy to "Type". We like dynamic languages because we are tired of untangling couplings. ----
Robert C. Martin (Uncle Bob) | unclebob@cleancoder.com
Uncle Bob Consulting LLC. | @unclebobmartin
847.922.0563 | web: objectmentor.com
Sunday, November 28, 2010
Mentoring & Apprenticeship (A reading from: "The Clean Coder")
----
Robert C. Martin (Uncle Bob) | unclebob@cleancoder.com
Uncle Bob Consulting LLC. | @unclebobmartin
847.922.0563 | web: objectmentor.com
Saturday, November 27, 2010
Mentoring & Apprenticeship (A reading from: "The Clean Coder")
----
Robert C. Martin (Uncle Bob) | unclebob@cleancoder.com
Uncle Bob Consulting LLC. | @unclebobmartin
847.922.0563 | web: objectmentor.com
Tuesday, November 23, 2010
A Certification Worth Having.
- If you spend months studying and working just for the chance of getting it; then it’s worth having.
- If other people know that you had to work your ass off to get that piece of paper, then the piece of paper is worth showing.
- If well-recognized masters scrutinize and examine you after your study and effort, and then sign their name, then that piece of paper is golden.
- If only half the people who make the attempt achieve the goal, then that piece of paper is a competitive advantage.
That’s the long and short of it. A certification worth having is one that you have to work your ass off just to get a chance of receiving.
- It should take months or years.
- It should cost a lot, just to make the attempt.
- It should be backed by experts whose reputations are on the line.
- A sizable fraction should fail or drop out.
Monday, November 22, 2010
What Killed Waterfall Could Kill Agile.
Tuesday, November 9, 2010
Craftsman 63: Specifics and Generics.
The Craftsman: 63
Specifics and Generics
Robert C. Martin
8 Nov, 2010
Sat, 18 Mar 2002, 13:00
I was sitting in the observation deck practicing some code katas when I saw Avery walk by.
“Avery, come take a look at this.”
Avery stopped and looked over my shoulder. “Ah, you’re doing the Stack kata.”
“Yes, this is the third time I’ve done it this morning, and I’ve noticed something interesting. Do you have a few minutes?”
Avery sat down next to me and chortled: “Most certainly and without the slightest hesitation.”
I didn’t want to get into the formal banter game that we often played, so I just said: “Do you remember one of Mr. C’s rules about TDD? The one about tests being specific and code being generic?”
“Indeed I do, Alphonse. Indeed I do. Let’s see. (Ahem): ‘As the tests get more specific, the code gets more generic.’ Is that the one?”
“Yes, that’s the one. I always thought I knew what it meant, but this morning something kind of hit me.”
“And what was it that caught your attention, if I may ask?”
“Let’s walk through the Stack kata, and I’ll show you.”
“Very well, Alphonse, Shall I endeavor to write the first test?”
“Please do.”
And so Avery wrote the first test of the Stack kata:
public class StackTest {
@Test
public void newStackShouldBeEmpty() {
Stack stack = new Stack();
assertThat(stack.size(), equalTo(0));
}
}
I responded with the standard ritual. “OK, now I can make that fail with this…”
public class Stack {
public int size() {
return -1;
}
}
“…and I can make it pass with this.”
public int size() {
return 0;
}
“Excellent, Alphonse. Nicely done! Now for the next test, we’ll ensure that the size, after a push, is one.”
@Test
public void sizeAfterPushShouldBeOne() {
Stack stack = new Stack();
stack.push(0);
assertThat(stack.size(), equalTo(1));
}
“OK, and I can make that pass with a simple increment.”
public class Stack {
private int size;
public int size() {
return size;
}
public void push(int element) {
size++;
}
}
“Oh, well done, well done, Alphonse, old chap…”
I interrupted him. “OK, Avery, this is what I wanted to show you. Notice that I changed the size method to return a variable instead of a constant?”
“Indeed!”
“Right, so that’s taking something very specific and making it more general.”
“Indeed. Indeed. What could be more specific than a constant? And certainly a variable, by its very nature, is more general than a constant. After all a constant has only one value, whereas a variable can have many different values. An astute observation that!”
“But now,” he continued, “I must insist that after a pop, the size should be zero once again.”
Had he really understood my point? If not, I had more to show him.
“Right.” I said. “And don’t forget to refactor the tests to eliminate the duplication.”
“Oh, I shan’t forget, Alphonse. I shan’t forget.”
public class StackTest {
private Stack stack;
@Before
public void setUp() {
stack = new Stack();
}
…
@Test
public void sizeAfterPushAndPopShouldBeZero() {
stack.push(0);
stack.pop();
assertThat(stack.size(), equalTo(0));
}
}
“OK, and now I can make this pass with a simple decrement.”
public int pop() {
size--;
return -1;
}
“Capital, old sport! Capital! But now, I’m afraid that it’s quite necessary to prevent you from popping an empty stack. I do hope that doesn’t inconvenience you too much.”
@Test(expected=Stack.Underflow.class)
public void shouldThrowUnderflowWhenEmptyStackIsPopped() {
stack.pop();
}
“Good.” I said, still ignoring his patois, “Now I can make that pass by checking for zero.”
public int pop() {
if (size == 0)
throw new Underflow();
size--;
return -1;
}
…
public class Underflow extends RuntimeException {
}
“Outstanding! Simply outstanding! You clearly have mastery over your medium. But now, alas, I must further impose upon you. You see, you must not allow the stack to exceed the specified size.”
@Before
public void setUp() {
stack = new Stack(2);
}
…
@Test(expected=Stack.Overflow.class)
public void shouldThrowOverflowWhenFullStackIsPushed() {
stack.push(1);
stack.push(2);
stack.push(3);
}
“Good. Now I can make that pass by creating the constructor and then comparing the size against the capacity in the push method.”
public class Stack {
private int size;
private int capacity;
public Stack(int capacity) {
this.capacity = capacity;
}
…
public void push(int element) {
if (size == capacity)
throw new Overflow();
size++;
}
…
public class Underflow extends RuntimeException {
}
public class Overflow extends RuntimeException {
}
}
“Sheer brilliance, old sport, old coot, old sod! But enough of these mundane machinations. It’s time to make this sad excuse of a program begin to act like a stack. So when you push an element, I must require that you pop that self-same element!”
@Test
public void shouldPopZeroWhenZeroIsPushed() {
stack.push(0);
assertThat(stack.pop(), equalTo(0));
}
“I’m afraid, dear Avery, that I can make that pass rather trivially.” I cursed myself under my breath for succumbing to his banter.
public int pop() {
if (size == 0)
throw new Underflow();
--size;
return 0;
}
“Devilishly clever my boy! You parried my thrust with just a flick of your wrist! But can you flick this away just as casually?”
@Test
public void shouldPopOneWhenOneIsPushed() {
stack.push(1);
assertThat(stack.pop(), equalTo(1));
}
I forced myself to avoid the banter. “This one is going to require a variable. And once again, notice that we are replacing something specific, with something more general.”
public class Stack {
…
private int element;
…
public void push(int element) {
if (size == capacity)
throw new Overflow();
size++;
this.element = element;
}
public int pop() {
if (size == 0)
throw new Underflow();
--size;
return element;
}
…
}
“Oh, ho! Yes, again, the constant is replaced with a variable. Specifics become generics. Well done, Alphonse! Well done! But as yet this beast behaveth not as ought a stack. Therefore shall I maketh you to perform a true LIFO operation!”
Why did he add that old-English twist? Was I distracting him? Was he looking ahead to the end-game and losing concentration?
@Test
public void pushedElementsArePoppedInReverseOrder() {
stack.push(1);
stack.push(2);
assertThat(stack.pop(), equalTo(2));
assertThat(stack.pop(), equalTo(1));
}
“OK, Avery, now watch this carefully.”
public class Stack {
…
private int elements[];
public Stack(int capacity) {
this.capacity = capacity;
elements = new int[capacity];
}
public void push(int element) {
if (size == capacity)
throw new Overflow();
elements[size++] = element;
}
public int pop() {
if (size == 0)
throw new Underflow();
return elements[--size];
}
…
}
“Do you see what happened Avery? We transformed that element variable into something more general than a variable. We transformed it into an array.”
Avery just looked at the code with his brows knitted together. His normally bulging eyes bulged even further. I could see the wheels turning in his head.
“Alphonse, do you realize that you didn’t delete that size code? You just moved it around.”
“What do you mean?”
“I mean that all that silly code that we wrote at first, the code for the size variable in order to pass the initial tests. I usually think of that as throwaway code – just something to do to get the early tests to pass. But we didn’t delete that code; we moved it. We moved the size++ and --size into array subscripts.”
“Yeah.” I said. “Maybe that code wasn’t so silly after all. It certainly wasn’t throwaway. But did you notice the transformations, Avery? We transformed specific code like constants, into generic code like variables and arrays.”
“Yeah, I did notice that Alphonse, and that means that from one test to the next we were generalizing and moving code.”
“We also added code like the constants, and the if statements for the exceptions, and the increments and decrements.”
“Yeah! So the process of changing the production code from test to test is not one of rewriting so much as it is of adding, moving, and generalizing.”
“That’s cool!” I said. “It means that none of the code we write to pass the early tests is wasted code; it’s just code that’s incomplete, not properly placed, or not general enough. It’s not that the code is wrong, it’s just – what’s the word?”
“Degenerate!” Avery said. “The early code is degenerate. It’s not silly or wasted; it’s just young! It needs to evolve. That earlier code is the progenitor of the later code.”
“I wonder.” I said.
“What?”
“I wonder if this is always true. Can you always evolve code, from test to test, by adding, moving, or generalizing it? Is the process of TDD really just a set of successive generalizations constrained by tests?”
“I don’t know. Let’s try the Prime Factors kata…”
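(For readers who want to try Avery's experiment: the Prime Factors kata typically ends, after its own chain of small generalizations, in something like the sketch below. The early special-case code for 2, 4, and 8 collapses into two nested loops, much as the Stack's constants became variables and arrays. The class name here is mine.)

```java
import java.util.ArrayList;
import java.util.List;

// Roughly where the Prime Factors kata ends up: each test-driven step adds,
// moves, or generalizes code until the special cases become these two loops.
public class PrimeFactors {
    public static List<Integer> of(int n) {
        List<Integer> factors = new ArrayList<Integer>();
        for (int candidate = 2; n > 1; candidate++)     // try each candidate factor
            for (; n % candidate == 0; n /= candidate)  // divide it out while it divides n
                factors.add(candidate);
        return factors;
    }
}
```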
Thursday, October 21, 2010
Danger! Software Craftsmen at Work.
On October 12, 2010 at QCon David Harvey gave a talk entitled Danger! Software Craftsmen at Work. This talk was nicely summarized and expanded upon by Gael Fraiteur in his blog: Why Should We Care About Software Craftsmanship? Part 2. But I have a few things to add.
Harvey makes the following points (as described by Fraiteur):
- The Manifesto for Software Craftsmanship is empty of content because it is not refutable, i.e. it is not possible for a reasonable person to disagree.
- The opposition of software craftsmanship to software engineering is pointless and may even give permission to software developers to ignore the lessons of software engineering.
- Metaphors, and the language we are using to describe ourselves and our activity, do matter. The people around us think of a craftsman as someone producing leather bags, not items you can rely on. Although software developers have their own definition of craftsmanship, what eventually matters is the perception of our customers. By choosing inappropriate metaphors, we are increasing the gap between those who build software, and those who use it.
Is the Manifesto for Software Craftsmanship empty because it is irrefutable? I think the notion is absurd. That's like saying that the Hippocratic Oath, or the Golden Rule are empty because they are irrefutable. The Manifesto is not a scientific hypothesis that requires experimental verification. Rather the Manifesto is a statement of beliefs and values that the signatories adhere to and promote. The Manifesto contains articles of belief, not statements of measurable fact, and is therefore not required to be falsifiable.
Is the Manifesto irrefutable? Would that it were so! Unfortunately the Manifesto is regularly refuted in both word and deed. For example, the first article of the Manifesto states that we value well-crafted software over working software; yet there continues a significant debate on the topic of "good-enough software". There is a large cohort of software developers who contend that well-crafted code is antithetical to time-to-market.
The second article of the Manifesto is more interesting still. It promotes the steady addition of value over simply responding to change. What value does this refer to? It refers both to the value of the software's function and the value of its structure. That is, we, as craftsmen, will continue to steadily improve both the structure and function of the software, rather than simply responding to change. This kind of responsible behavior is refuted daily by the actions of developers who release changes that damage both structure and function. If you doubt that this happens, consider this video of bad code that I posted a few months ago. Believe it or not, this is deployed production code.
Engineering vs. Craftsmanship.
Is craftsmanship antithetical to engineering? Harvey suggests this possibility based on some of the statements in Pete McBreen's classic book Software Craftsmanship in which he derides some of the more egregious, high-ceremony and high-documentation practices associated with Software Engineering. Harvey suggests that this may give "permission" to budding software craftsmen to ignore the good software engineering work that has been done over the decades.
I agree that this would be a bad thing. We don't want anyone in the Software Craftsmanship community to dismiss Software Engineering out of hand. The problem I have with Harvey's suggestion, however, is that none of the leaders in the Software Craftsmanship movement espouse the view that the history of Software Engineering is worthless. Indeed, quite the opposite is true. Software Craftsmen see themselves as Software Engineers. That does not mean we accept all of the Software Engineering dogma that has accumulated over the decades. It also doesn't mean that we reject it. It does mean that we learn it.
The Software Craftsmanship community is deeply committed to learning the lessons of the past. That means studying the things we did right, and the things we did wrong. Software Craftsmen immerse themselves in their craft. We continue to read the old books by DeMarco, Yourdon, Parnas, Dijkstra, Hoare, Weinberg, and their brethren. We study at the feet of the old masters so that we can learn how to be the next masters.
It is true that we have currently tabled some of the older practices that have been associated with Software Engineering; but we do not disrespect those practices, nor the people who at one time proposed and adopted them. They were pioneers who led the way and who, in some cases, showed us the paths to avoid.
The Craftsman Connotation
Harvey advises us to take care with the metaphors we choose. He makes the point that terms like craft, dojo, kata, apprentice, master etc., can have negative connotations. The word "craft" for example, may bring to mind the kind of quality one experiences at a flea-market or a craft fair. The martial-arts terms that are sometimes common in craftsmanship circles may bring to mind the notion of the "omnipotent loner" like Bruce Lee, or Neo. The terms "Master", "Journeyman", and "Apprentice" may bring to mind the secretive guilds of the middle ages with all their ritual, mysticism, and intrigue.
I think this is a legitimate concern. I also think it's easily dealt with. The strategy I've been using is "guilt by association". When I talk about Software Craftsmanship, I also talk about Software Professionalism. I use the terms interchangeably in order to reinforce the association in my listeners' (and readers') minds. When I talk about dojos and katas, it is always in the context of "practice". I use the terms together so that there is no doubt about what the terms actually mean.
Harvey is right in that we don't want to create a "secret language". There is nothing wrong with the memes we've chosen to communicate amongst ourselves; but we have to be careful to make sure we associate those memes with concepts like professionalism, practice, and seniority that our customers and employers understand and appreciate. We want these people's support. We want them to believe and trust in the values that we espouse. We will not accomplish that by disrespecting the metaphors that they depend upon.
Sunday, October 17, 2010
The Cost of Code?
In a panel at #scna yesterday, @chadfowler asked the question: "How many projects fail because of the code?" I think the point he was trying to make was that the primary causes of project failure are business issues, not technical issues.
I posed this question on twitter earlier today. The responses came quickly, and virtually all of them agreed that business issues are the more significant causes of project failure.
It's certainly true that projects fail because of cost, requirements, schedule, management, etc. It's also true that we seldom trace the cause of failure back to something as mundane as code. So Chad's point, if that's what it was, certainly has merit.
The conclusion we might draw from this is that code just isn't very important in the long run, and that the craftsmanship movement might just be a load of Yak Shaving. Indeed, Chad asked us to consider just that in his talk at #scna. If we follow that reasoning, then we should decrease our emphasis on technological prowess and skill, and increase our emphasis on business, requirements, budgets, schedules, and management.
Before I counter this argument, let me say that I do know of projects that have failed because of the code. Indeed, I know of companies that have failed because of the code.
This isn't actually very difficult to believe or understand. We all know that when the code is a mess, it becomes more and more costly to maintain and improve. If that cost exceeds what the project can afford, the project fails. If that cost exceeds what the company can afford, the company fails. In the cases that I am aware of, this is precisely what happened. The code was simply too costly for the business model to support.
So let's try a simple thought experiment. What fraction of projects would fail if the code was infinitely expensive to produce and maintain? Clearly all projects would fail because the code would be too expensive for any finite business model to support.
OK, so what if the code cost nothing to produce and maintain? What fraction of those projects would fail because of the code? Again, the answer is clear. No project would fail because of the code, if the code costs nothing to make.
What does it mean to cost nothing to make? It means that you would have the code you needed the instant you needed it. The code would simply be there, instantly, fully functional, free of defects. Any time you needed a change, the change would instantly be in effect, fully deployed, fully operational.
So let's say you are thrown into a cave by some thieves. In that cave you find an old beat-up PCjr, complete with the IR chiclet keyboard. You pick up that keyboard and rub a smudge off the enter key. Whoosh! A genie appears on the screen and grants you the ability to have zero-cost code for the rest of your life! Would any of your projects ever fail from that point on?
Remember, nobody else has your ability. Nobody else can produce the code they want, instantly, and without defect. Nobody else can make and deploy changes in zero time. So you have a tremendous competitive advantage. Is there any way you could fail? I think my dog Petunia might fail, but anyone smarter than that should become a multi-trillionaire.
If we had that magic PC jr, there wouldn't be any schedule or budget issues. The cost of mismanagement and/or bad requirements would be close to zero. So all those things that cause projects to fail would become irrelevant.
But we don't have that magic PC jr. Code does cost money to produce and maintain. But if I, as a craftsman, can invoke a fraction of the power of that Genie to reduce the cost of producing and maintaining code, then I simultaneously reduce the cost and risk of mismanagement, of bad requirements, of tight schedules, and of tight budgets. By reducing the cost of the thing that's being managed, we reduce the cost of error and increase the chances of success!
Why is it that projects fail due to bad requirements, bad management, bad schedules, and bad budgets? They fail because the cost of error is huge. Why is the cost of error huge? Because the cost of the code is so horribly large. If code cost nothing to produce, the cost of error would be close to zero.
This realization has not been lost on the business community. They tried to solve it by reducing the hourly rate of programmers. They set up horrifically expensive and risky mechanisms in order to hire programmers who lived half a world away in a wildly different culture. They faced the issues of time zones and languages, and cultural mismatch in order to reduce the cost of code. They did this because they understood that the it is that cost that drives the cost of management error. They did this because it is that cost that makes projects fail.
Unfortunately this strategy didn't work as well had been hoped. Some folks have made it work; more or less. But the majority of the off-shoring efforts have been disappointing. And so the cost of code remains high, and therefore the risk of error is also high.
And that brings us back to the question at hand. How many projects fail because of the code? The argument above suggests that all failures are a direct result of the cost of code. How many projects fail because of code? All of them!
More importantly, what is the single most effective way to increase the chances of project success? Is it improving requirements? Management? Schedules and budgets? All those things would help, but they are all secondary to the thing that truly drives project failure: The cost of the code.
I posed this question — how many projects fail because of the code? — on Twitter earlier today. The responses came quickly, and virtually all of them agreed that business issues are the more significant causes of project failure.
It's certainly true that projects fail because of cost, requirements, schedule, management, etc. It's also true that we seldom trace the cause of failure back to something as mundane as code. So Chad's point, if that's what it was, certainly has merit.
The conclusion that we might draw from this is that code just isn't very important in the long run, and that the craftsmanship movement might just be a load of Yak Shaving. Indeed, Chad asked us to consider just that in his talk at #scna. If we follow that reasoning, then we should decrease our emphasis on technological prowess and skill, and increase our emphasis on business, requirements, budgets, schedules, and management.
Before I counter this argument, let me say that I do know of projects that have failed because of the code. Indeed, I know of companies that have failed because of the code.
This isn't actually very difficult to believe or understand. We all know that when the code is a mess, it becomes more and more costly to maintain and improve. If that cost exceeds what the project can afford, the project fails. If that cost exceeds what the company can afford, the company fails. In the cases that I am aware of, this is precisely what happened. The code was simply too costly for the business model to support.
So let's try a simple thought experiment. What fraction of projects would fail if the code was infinitely expensive to produce and maintain? Clearly all projects would fail because the code would be too expensive for any finite business model to support.
OK, so what if the code cost nothing to produce and maintain? What fraction of those projects would fail because of the code? Again, the answer is clear. No project would fail because of the code, if the code costs nothing to make.
What does it mean to cost nothing to make? It means that you would have the code you needed the instant you needed it. The code would simply be there, instantly, fully functional, free of defects. Any time you needed a change, the change would instantly be in effect, fully deployed, fully operational.
So let's say you are thrown into a cave by some thieves. In that cave you find an old beat-up PCjr, complete with the IR chiclet keyboard. You pick up that keyboard and rub a smudge off the enter key. Whoosh! A genie appears on the screen and grants you the ability to have zero-cost code for the rest of your life! Would any of your projects ever fail from that point on?
Remember, nobody else has your ability. Nobody else can produce the code they want, instantly, and without defect. Nobody else can make and deploy changes in zero time. So you have a tremendous competitive advantage. Is there any way you could fail? I think my dog Petunia might fail, but anyone smarter than that should become a multi-trillionaire.
If we had that magic PC jr, there wouldn't be any schedule or budget issues. The cost of mismanagement and/or bad requirements would be close to zero. So all those things that cause projects to fail would become irrelevant.
But we don't have that magic PC jr. Code does cost money to produce and maintain. But if I, as a craftsman, can invoke a fraction of the power of that Genie to reduce the cost of producing and maintaining code, then I simultaneously reduce the cost and risk of mismanagement, of bad requirements, of tight schedules, and of tight budgets. By reducing the cost of the thing that's being managed, we reduce the cost of error and increase the chances of success!
Why is it that projects fail due to bad requirements, bad management, bad schedules, and bad budgets? They fail because the cost of error is huge. Why is the cost of error huge? Because the cost of the code is so horribly large. If code cost nothing to produce, the cost of error would be close to zero.
This realization has not been lost on the business community. They tried to solve it by reducing the hourly rate of programmers. They set up horrifically expensive and risky mechanisms in order to hire programmers who lived half a world away in a wildly different culture. They faced the issues of time zones, languages, and cultural mismatch in order to reduce the cost of code. They did this because they understood that it is that cost that drives the cost of management error. They did this because it is that cost that makes projects fail.
Unfortunately this strategy didn't work as well as had been hoped. Some folks have made it work, more or less. But the majority of off-shoring efforts have been disappointing. And so the cost of code remains high, and therefore the risk of error is also high.
And that brings us back to the question at hand. How many projects fail because of the code? The argument above suggests that all failures are a direct result of the cost of code. How many projects fail because of code? All of them!
More importantly, what is the single most effective way to increase the chances of project success? Is it improving requirements? Management? Schedules and budgets? All those things would help, but they are all secondary to the thing that truly drives project failure: The cost of the code.
Be a good one. #scna 2010
I didn't expect it, but something profound happened at #scna this week. I expected the conference to be good. I expected it to be fun. I expected to see many old and new faces and have stimulating conversations. And in all these things my expectations were met. What I didn't expect was the gelling.
There was a meme at this conference that pervaded every talk and every session. Doug Bradbury (@dougbradbury) coined it in the title of his talk: Made to Make. His point was that we are makers. We love to make. There is something within us that drives us to create. Doug opened his talk with a story from his childhood. He was eight, hanging out with his grandfather in the workshop. His grandfather saw him leaning over his child-sized workbench and asked him what he was doing. "I'm making stuff," was his reply.
Keavy McMinn (@keavy), in her talk Artist to Programmer, reiterated this meme more directly when she quoted a tweet from one of her friends: "I just want to make stuff, I don't really care if its Flash or objective-C or fuzzy felt" and again later: "The future belongs to the few of us still willing to get our hands dirty". One of the most moving moments in Keavy's presentation was her description of daily refactoring a one-ton tower of bricks during an art show. She showed pictures of this tower from day to day. Each different. Each telling a different story. Each lovingly built.
Michael Norton (@DocOnDev) talked about the history of medicine, medical education, and medical certification. He placed it all on a time line and showed how medicine transitioned through phases of empirical observation, to the canonization of an initial body of knowledge, to rapid theoretical and technological development, to the intensely supervised and collaborative learning model of today. And throughout this long history and transition, medicine began as, and remains, a craft developed by people who love what they do.
I (@unclebobmartin) and Michael Feathers (@mfeathers) both gave talks about functional programming, showing us new (old) ways to make our stuff. Enrique Comba Riepenhausen (@ecomba) gave an impassioned talk about fostering partnerships with our customers, reiterating the pleas and advice from both Ken Auer and Chad Fowler (@chadfowler) reminding us that: We make things for others.
There were lots of open-space sessions about all kinds of things. Laptops were always open, and code was never far away. There were "randori" coding sessions on stage.
[Photo: Me exhorting Adewale Oshineye (@ade_oshineye) to "Write a Test".]
There were impromptu coding sessions and lessons and discussions.
[Photo: Impromptu Clojure coding session]
Corey Haines (@coreyhaines) gave the closing talk, and it summarized the tone perfectly. The message, set amongst stories of cats, and cows, and redwoods, was simple: We are makers. We love what we do. We are happiest doing what we do. So we need to do the things that make us happiest.
Yes, there was a gelling at #scna 2010. It was a gelling around a meme. It was the consolidation of an idea. It was a group of people who found themselves to be in violent agreement over a central organizing notion.
Abraham Lincoln said it over 100 years ago. Had he been at #scna 2010 he might have gotten up on stage after Corey's talk and sent us home with the following exhortation:
"Whatever you are, be a good one."
(Thanks to Monty Ksycki for taking all those great pictures!)
Sunday, October 3, 2010
The Craftsman 62, The Dark Path.
Fri, 17 Mar 2002, 14:00
I felt I could use the break so I walked over and sat next to Jerry.
"Sure, Jerry, what's a Kata?"
Jerry rolled his eyes. "You've never done a kata?"
I could feel my guard going up, but I tried to relax. "No, can't say I have."
Jerry smirked and then called over to Jasmine: "Hey Jaz, do you want to tell Alphonse what a kata is?"
Jasmine's long dark hair swished playfully as she turned her head to face me. She nailed me with those sparkling green eyes as she answered: "What, the hotshot's never done a kata?"
"He says not. Can you believe it?"
"Jeez, what do they teach these kids nowadays?"
"Oh come on!" I said, starting to get annoyed. "You guys are only a couple of years older than me, school hasn't changed that much."
Jasmine smiled at me, and I felt my annoyance evaporate. That smile... "Relax Alphonse, we're just poking fun at you. A kata is just a simple program that you write over and over again as a way to practice. We do them all the time. It's part of our normal discipline."
"You write the same code over and over?" This was new to me, and it didn't make a lot of sense.
Jerry nodded and explained: "Yeah. Sometimes we'll do a kata two or three times in a row, exactly the same each time. It's a good way to practice your hot-keys."
"And sometimes," Jasmine added, "we solve them in different ways, using different techniques as a way to learn different approaches and reinforce our disciplines."
"And sometimes we just do them for fun." Jerry concluded.
"Which one are you going to show him?" Jasmine asked.
"I was thinking about doing 'Word Wrap'."
"Oh, that's a good one. You're going to like this Alphonse. You guys have fun." And with that she turned back to her work.
I turned to Jerry and asked: "Word Wrap?"
"Yeah, it's a simple problem to understand, but it's oddly difficult to solve. The basic premise is really simple. You write a class called Wrapper, that has a single static function named wrap that takes two arguments, a string, and a column number. The function returns the string, but with line breaks inserted at just the right places to make sure that no line is longer than the column number. You try to break lines at word boundaries."
I thought about this for a moment and then said: "You mean like a word processor, right? You break the line by replacing the last space in a line with a newline."
Jerry nodded. "Yeah that's the idea. Pretty simple huh?"
I shrugged. "Sounds simple, yes."
Jerry pushed the keyboard in my direction. "OK, then why don't you start."
I knew this was a trap of some kind, but I couldn't see how. So I said: "I suppose we should begin with simple degenerate tests." I took the keyboard and began to type. I got the first few tests passing as Jerry watched.
Jerry got real interested as I wrote this. When I got the second test working he said: "What's all that @RunWith and @Suite stuff you are typing?"
I smiled. Apparently I was about to teach Jerry something. "Oh, yeah." I said nonchalantly. "That's the TestNest pattern. I learned it from Justin a few days ago. It lets you put more than one test class in a file. Each test class can have its own setups and teardowns."
"Yeah, that's kind of slick. But who's this Justin dude?"
I pointed and counted ceiling lights. "He works just down the hall, beneath the 8th light."
"You mean by those guys who are always walking on treadmills while they code?"
I nodded and kept on coding while Jerry stared back down the hall and recounted the lights.
Jerry looked back just in time to see that last test pass. He looked over the code and nodded. "Yes, that's just about exactly how I solved it the first time. That replaceAll is a bit of a hack, isn't it?"
"Yes, but it gets the test to pass. 'First make it work, then make it right.'"
Jerry nodded.
"Anyway, it's pretty straightforward so far." I said. And so I went on to write the next test.
Jerry nodded sagely. "Yes, that's the obvious next test."
"Yes, and with the obvious failure." I agreed. So then I looked back at the code.
I stared at it for a long time. But there did not seem to be any simple thing that I could do to make the test pass.
After a few minutes, Jerry said: "What's the matter Alphonse? Stuck?"
"No, this should be simple. I just..." In frustration I took the keyboard and began to type. I typed for quite a while, adding and erasing code. Jerry nodded knowingly, and sometimes grunted. After about five minutes Jerry stopped me. The code looked like this:
"Are you sure you're on the right track, Alphonse?"
I looked at the code and realized that I had been coding blindly. I ran the tests in desperation, but they just hung in an infinite loop. I could kind of feel what needed to be done, but it wasn't at all clear how I should proceed.
"Give me another shot at this." I said, as I erased all the code and started over. Jerry just smiled and watched as I flailed around for another five minutes or so. Finally, with lots of tests failing he stopped me again.
"What are you doing wrong, Alphonse?"
I stared at the screen for a minute. Then I said: "I know I can get this working, give me another shot."
"Alphonse, I know you can get it working too; but that's not the point. Stop for a minute and tell me what you are doing wrong."
I could hear Jasmine stifling a giggle. I looked over at her, but she didn't meet my eye. Then I took my fingers off the keyboard and hung my head. "I can't seem to get this test to pass without writing a lot of untested code." I said.
"That's true." Said Jerry, but it's not quite the answer I was looking for. You were doing something wrong. Something really wrong. Do you know what it was?
I thought about it for awhile. I had been trying to figure out the algorithm. I had tried lots of different approaches. But all my guesses turned out wrong. -- Oh!
I looked Jerry square in the eye and said: "I was guessing."
"Right!" Jerry beamed. "And why were you guessing?"
"Because the test was failing and I couldn't figure out how to get it to pass."
Now Jerry narrowed his gaze, almost like he was looking through me. "And what does that tell you?"
"That the problem is hard?" I guessed.
"No, Alphonse, the problem is not hard. When you see the solution, you're going to be very angry at yourself. The reason you could not figure out how to pass that test, Alphonse, is that you were trying to pass the wrong test."
I looked at the tests again. They seemed perfectly logical. So I asked Jerry: "How could these be the wrong tests?"
Jerry smiled with a grin that rivaled Jasper's. "They are the wrong tests, Alphonse, because you could not figure out how to pass them."
I gave Jerry a stern look. "You're being circular, Jerry."
"Perhaps I am. Look at it this way. The test you are trying to pass is forcing you to solve a very large part of the problem. Indeed, it might just be the whole problem. In any case, the bite you are taking is too big."
"Yeah, but..."
Jerry stopped me and said: "Did you ever read The Moon is a Harsh Mistress Alphonse?"
"Uh... Heinlein, wasn't it? Yes, I read it a few years back. It was a great story."
"Indeed it was. Do you remember this quotation?"
"As a matter of fact, I do. I thought it was very profound.""[W]hen faced with a problem you do not understand, do any part of it you do understand, then look at it again."
"OK then Alphonse, apply that here. Find some part of this problem that you do understand."
"I understand the problem..."
"No, you think you understand the problem, but clearly you don't. If you understood it, you'd be able to solve it. Find some simpler tests to pass."
I thought about this for a few seconds. What was so hard about this problem? The thing I'd been struggling with was how to break the lines at spaces. Each of my "solutions" was tangled up with hunting for just the right space to replace with a line end.
I looked at Jerry and said: "What if I solved the part of this problem that did not deal with spaces? Lines that have no spaces only need to be broken once they've hit the column limit."
Jerry pointed at the keyboard, and I started again. I wrote the same degenerate tests.
But then I changed tack and wrote a test that wrapped a line without spaces. That test was trivially easy to pass.
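The listing itself is an image lost here. A sketch of that trivially passing version, assuming the class and signature described earlier (my reconstruction, not the original):

```java
// Sketch: a line with no spaces is simply cut once at the column limit.
public class Wrapper {
    public static String wrap(String s, int col) {
        if (s == null)
            return "";
        if (s.length() <= col)
            return s;
        return s.substring(0, col) + "\n" + s.substring(col);
    }
}
```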
The next test was pretty obvious. It should continue to wrap a string without spaces, creating lines that are no longer than the column limit.
Jerry looked at the test and nodded. "How will you solve that, Alphonse?"
"I just need to put a loop into the wrap function." I said.
"I think there's an easier way." He said.
I looked at the code for a bit, and then said: "Oh! Sure, I could recurse."
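Again the original listing is missing; the recursive step Jerry is hinting at might plausibly be (my sketch):

```java
// Sketch: recurse instead of looping -- emit the first col characters
// as a line, then let the recursive call wrap the remainder.
public class Wrapper {
    public static String wrap(String s, int col) {
        if (s == null)
            return "";
        if (s.length() <= col)
            return s;
        return s.substring(0, col) + "\n" + wrap(s.substring(col), col);
    }
}
```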
The tests passed, and Jerry nodded approvingly. "That looks like a framework you could build upon. What's next?"
"Now that I can wrap lines without spaces, it ought to be easier to wrap lines with spaces!"
"Give it a shot." He said. So I wrote the simplest test I could. A space right at the column limit.
"Do you remember how you made that test pass last time?" Jerry asked.
"Yeah." I grimaced. "I use the replaceAll hack."
"Is that how you're going to solve it now?"
I looked at the code, and the answer was obvious. "Of course not!" I exclaimed. "All I need to do is check to see if the character at the column limit is a space!" and I wrote the following code.
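The listing is another lost image. A hedged sketch of the check Alphonse describes, including the recursive wrap call that Jerry questions next:

```java
// Sketch: if the character at the column limit is a space, break there
// and skip over the space itself; otherwise cut mid-word as before.
public class Wrapper {
    public static String wrap(String s, int col) {
        if (s == null)
            return "";
        if (s.length() <= col)
            return s;
        if (s.charAt(col) == ' ')
            return s.substring(0, col) + "\n" + wrap(s.substring(col + 1), col);
        return s.substring(0, col) + "\n" + wrap(s.substring(col), col);
    }
}
```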
"Why'd you put that wrap call in there?" Jerry asked. "You're getting a little ahead of yourself, aren't you?"
"I guess, but it's kind of obvious that it belongs there. Just look at the symmetry!"
"I agree." Jerry said smiling. "Continue on."
The next test was just as obvious. The space should be before the column limit. So I typed the following:
"Passing this one is going to be tricky." I said under my breath.
"Is it?" Jerry queried.
I looked again, and it hit me. "Oh, no, it's just a small change!" And I typed the following.
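Once more the listing is an image lost in extraction; the "small change" was presumably a search backward for a space, something like this sketch of mine:

```java
// Sketch: when a space falls before the column limit, break at the
// last such space instead of cutting mid-word.
public class Wrapper {
    public static String wrap(String s, int col) {
        if (s == null)
            return "";
        if (s.length() <= col)
            return s;
        if (s.charAt(col) == ' ')
            return s.substring(0, col) + "\n" + wrap(s.substring(col + 1), col);
        int space = s.substring(0, col).lastIndexOf(' ');
        if (space >= 0)
            return s.substring(0, space) + "\n" + wrap(s.substring(space + 1), col);
        return s.substring(0, col) + "\n" + wrap(s.substring(col), col);
    }
}
```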
The tests passed, and I was getting excited. "This is so strange, the whole algorithm is just falling into place!"
"When you choose the right tests, Alphonse, they usually do."
"OK, so now let's make the column boundary really small so that it has to chop the string up into lots of little lines."
"That one passes right away!" I said. Wow, I think we're done.
"Not quite." Jerry said. "There's another case."
I studied the tests. "Oh, there's the case where the character after the column limit is a space." I wrote the tests, and it was trivial to pass.
Jerry smiled as the tests passed. "That's the algorithm all right. But I bet you could clean this up a bit."
"Yeah, there is a lot of duplication in there." So I cleaned up my work with the following result.
I looked at the code in some astonishment. This really was a very simple algorithm! Why couldn't I see it before?
"You were right." I said to Jerry. "Now that I see this algorithm for what it is, it's kind of obvious. I guess choosing the right tests is pretty important."
"It's not so much choosing the right tests, Alphonse; it's about realizing that you are trying to solve the wrong test."
"Yeah, the next time I get stuck like that, and start guessing and flailing, I'm going to re-evaluate the tests. Perhaps there'll be simpler tests that will give me a clue about the real solution."
And then I stopped myself and asked: "Is that true, Jerry? Is there always a simpler test that'll get me unstuck?"
Jerry was about to answer when a spitwad hit him in the side of the face. Jasmine was laughing and running down the hall. Jerry leapt out of his seat to chase after her.
I just shook my head and wondered.
Friday, September 24, 2010
The Hacker, The Novice, The Artist, and The Craftsman.
In my last blog "Too Small to Refactor", I made the statement:
"Clean code has always been about money, and has never been about art."Apparently this took a few people by surprise. One person comented:
"...I thought we were talking about craft, and the cost-cutting as a subproduct..."So what is the difference between a craftsman and an artist? And just to add some spice, how do they differ from a hacker and a novice?
I realize that I am making a Myers-Briggs type error. People cannot truly be classified using binary attributes. A person classified by MBTI as an introvert, certainly has some extrovert characteristics. By the same token a programmer who shows some attributes of a hacker, probably also shows some attributes of a craftsman.
Still, I think the definitions of these terms can be useful as a way to classify programmer behaviors.
What is the difference between the Hacker, the Novice, the Artist, and the Craftsman? It's all about their personal definition of "Done".
The Artist is done when the structure and behavior of the artist's code has achieved perfection in the artist's eyes. There is no consideration of money spent, or money to be earned. The artist does not care about ROI. The artist does not care how long it takes. The artist cares only about the final result.
An artist will spend hours, days, even weeks, on a single line of code. An artist will throw away whole functions and modules because they somehow don't feel right. An artist polishes, and polishes, and polishes in pursuit of some elusive goal of perfection.
The Hacker is done when the behavior of the code achieves some personal goal. The Hacker is not concerned with ROI. The Hacker does not care about the code at all. The Hacker does not care about how much, or how little, is spent creating the code. The Hacker does not care if anyone else ever uses the code. The Hacker only cares about making it work -- once. After that, the Hacker loses interest.
The Novice is done as soon as the code works "well enough". The Novice strives to minimize initial coding time. The Novice is not concerned about ROI. The future cost of the code is of no concern to the Novice. Nor does the Novice care about the number of hidden and/or subtle defects left in the code. The Novice simply wants to get to the next task as soon as possible. The Novice is driven by schedule; or rather, the Novice is driven by pleasing managers who are driven by schedule.
The Craftsman is done when ROI has been maximized. The Craftsman strives to be a good steward of the monies being spent. The Craftsman wants to make sure that every dollar goes as far as it can, and earns as much as it can in return. Therefore the Craftsman makes sure the code works, and can be kept working with minimum extra cost.
The Craftsman understands that most defects in behavior and structure will be very expensive to repair later and very inexpensive to eliminate now. So the Craftsman pushes towards a very clean implementation. But the Craftsman also recognizes that some rare defects in behavior and structure are going to cost more to eliminate than to tolerate; and so the Craftsman uses judgment, acquired over years, to maximize ROI.
Saturday, September 18, 2010
The Uncertainty Principle, and the Quantum Discontinuity.
What is the uncertainty principle? Some people think of it as the "Observer Effect", the fact that any time you measure an object, you change something about it. For example, to see where an object is you have to bounce photons of light off of it, and those photons change the object.
But that's not what the uncertainty principle really is. The Heisenberg Uncertainty Principle (HUP) is defined as follows:
Δx · Δp ≥ ħ/2
That is, the uncertainty in position times the uncertainty in momentum is greater than or equal to the reduced Planck constant over two. The reduced Planck constant (ħ) is 1.054571628(53)×10⁻³⁴ J·s.
What this means is that you cannot know both the position and momentum of something. To the extent you know one, the other is uncertain. Of course Planck's constant is a very small number, so most of the time the uncertainty of our knowledge does not matter. It only becomes significant over very short distances and very small changes in momentum.
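To see the scale, here is a back-of-the-envelope example (the numbers are illustrative, not from the original post): suppose an electron is confined to roughly the width of an atom, Δx ≈ 10⁻¹⁰ m. Then:

```latex
\Delta p \;\ge\; \frac{\hbar}{2\,\Delta x}
        \;=\; \frac{1.05\times10^{-34}\ \mathrm{J\cdot s}}{2\times10^{-10}\ \mathrm{m}}
        \;\approx\; 5\times10^{-25}\ \mathrm{kg\cdot m/s}
```

For an electron (mass ≈ 9.1×10⁻³¹ kg) that momentum uncertainty corresponds to a velocity uncertainty of roughly 6×10⁵ m/s — enormous at atomic scales. For a thrown baseball the same Δp is immeasurably tiny, which is why we never notice the HUP in everyday life.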
To understand where this uncertainty comes from, we need to understand that all matter and all energy is composed of waves. I know this is difficult to envision, but take it on faith for the moment because once you accept this fact the uncertainty principle suddenly makes a great deal of sense.
Imagine a circular pond of very still water. The surface is like a mirror in all directions. Hold a stick in your hand, and insert the tip into the water. Move the stick up and down at a fixed frequency. Waves will ripple away from the stick in all directions, completely filling the pond. The waves have no certain position. They are everywhere on the surface of the pond. However, because the frequency is fixed, the waves have a very well defined energy (momentum). That's the first half of the HUP: we know the energy, but we cannot determine any position.
Now change the way you move the stick. Shake it up and down randomly -- perfectly randomly! Make sure you incorporate every frequency into your motion. The surface of the pond will return to its mirror sheen because all frequencies will integrate out. You will know where all the energy is (it's in the stick), but you won't know how much energy there is, because it's all random. And that's the second half of the HUP: you know the position of the energy, but the quantity is purely random.
That's how photons work. Photons are "particles" of light. But does light really move as a spray of particles? It sometimes seems to. However, it also seems to move in waves like the waves of a pond.
Planck showed that the energy of a photon is equal to Planck's constant times the photon's frequency:
E = hν
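For a concrete sense of the numbers (a worked example, not from the original post), take green light with a wavelength of about 500 nm:

```latex
\nu = \frac{c}{\lambda} = \frac{3\times10^{8}\ \mathrm{m/s}}{5\times10^{-7}\ \mathrm{m}}
    = 6\times10^{14}\ \mathrm{Hz},
\qquad
E = h\nu \approx 6.6\times10^{-34} \times 6\times10^{14}
  \approx 4\times10^{-19}\ \mathrm{J} \approx 2.5\ \mathrm{eV}
```

A single visible photon carries only a few electron-volts, which is why individual photons go unnoticed unless the flux is very low.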
So now, imagine a light source that emits light at a fixed frequency at a rate exactly equal to one photon per second. Is this light source emitting one photon per second? Or is it filling the space around it with light waves? The answer is both. The space around the light source is filled with a field of waves. However, those waves cannot interact with any other matter more than once per second. And the position of that interaction is, by the Uncertainty Principle, random. So if you set up a screen around the light source, you'd see tiny little sparks of light in random positions at roughly one second intervals.
The waves are all there, filling space like the waves on the pond, but the energy of those waves can only be deposited in fixed quantities, and the position of each one of those deposits is random. If you put such a light source in the center of a room, it would "illuminate" that room. However, your eyes would only register the photons that managed to randomly deposit their energy on your retina via a pathway that passed through your pupil after reflecting off the objects in the room. And that would happen at a rate much less than once per second because most of the photons emitted by the light source would deposit their energy somewhere other than in your eye.
What is bouncing off the furniture in the room? Photons? No, it is the waves that are reflecting off the objects in the room and that are passing through your pupil, refracting through your cornea and lens, and "striking" your retina. And then those waves deposit their energies as photons at uncertain locations. The waves determine the probability that the energy will be deposited at one uncertain place or another. So in some very real sense the waves are waves of probability. If an area of the room is in shadow, no waves will be present in that part of the room, and so the probability that the waves will deposit their energy as photons in that area is zero.
If you put a camera in the room and left the shutter open for a very long time, the camera would record a perfectly normal image of an illuminated room. Over time the field of waves would deposit some of its energy as photons in the camera's receptor. The probability information carried by those waves would cause those photons to build up an image of the room.
Indeed, that's what's going on right now as you are looking at these words. The field of waves leaving your computer screen carries probability information to your retina, causing photons to be randomly deposited there. It's just that the flux of photons is so huge that we do not notice their randomness.
What does this have to do with software?
Nothing directly.
Friday, September 17, 2010
Too Small to Refactor?
In John MacIntyre's second blog about Clean Code he presented a very simple little payroll calculator, refactored it, and then asked whether it was truly worth refactoring in an ROI sense. His conclusion was that he would probably not refactor it in a "real" project, but probably would refactor it if it were his own project. His reasoning was that the refactoring was worth doing for artistic reasons, but not for financial reasons.
This argument suggests that Clean Code is about art rather than money. This is a fundamental flaw in logic. Clean code has always been about money, and has never been about art. Craftsmen keep their code clean because they know that clean code costs less. They know that cleaning code has a one-time cost, but that leaving it unclean has a long-term repeating chronic cost that increases with time. Craftsmen understand that the best way to reduce cost and increase ROI is to keep their code very clean.
Here was the code that John began with (I've translated it from C# to Java for my own sanity.)
If we are going to refactor this, we're going to need some tests. So the first thing I did was to write enough tests to cover the code.
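Neither John's original gist nor the tests survive in this post, so here is a hedged sketch of what they might have looked like: a single class with a boolean flag, plus plain-assert characterization tests. The class name, the flag's meaning, and the overtime rule are illustrative assumptions, not John's actual code.

```java
// Hypothetical reconstruction of the starting point. The gists are not
// reproduced in the post; names and rules here are assumptions.
class PayCalculator {
    // isHourly == true:  time-and-a-half for hours over 40
    // isHourly == false: straight time for all hours
    double calculate(double hours, double rate, boolean isHourly) {
        double pay;
        if (isHourly) {
            double overtime = Math.max(0, hours - 40);
            double base = hours - overtime;
            pay = base * rate + overtime * rate * 1.5;
        } else {
            pay = hours * rate;
        }
        return pay;
    }
}

class PayCalculatorTest {
    public static void main(String[] args) {
        PayCalculator calc = new PayCalculator();
        // Characterization tests: pin down the current behavior before
        // refactoring anything.
        check(calc.calculate(40, 10, true), 400.0);   // no overtime
        check(calc.calculate(45, 10, true), 475.0);   // 5 hours at 1.5x
        check(calc.calculate(45, 10, false), 450.0);  // straight time
        System.out.println("all tests pass");
    }

    static void check(double actual, double expected) {
        if (actual != expected)
            throw new AssertionError("expected " + expected + " got " + actual);
    }
}
```

The tests come first precisely so that every refactoring step that follows can be verified against them.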
The algorithm was a little bit wordy, so I shortened it up a bit and made the two sections of the if statement independent of each other.
Next I got rid of that boolean argument. Boolean arguments are always troublesome beasts. Some poor schmuck is bound to call it with the wrong value, and all the poor people reading this code will wonder whether they should look up the argument to see what it means. Boolean arguments loudly declare that this function does two things instead of one thing.
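The intermediate step isn't shown in the post either. One plausible sketch, assuming the flag simply migrated from the method to the constructor, looks like this:

```java
// Sketch (assumption): the boolean argument moves out of calculate() and
// into the constructor, so each instance performs exactly one kind of
// calculation and calculate() takes only the data it needs.
class PayCalculator {
    private final boolean hourly;

    PayCalculator(boolean hourly) {
        this.hourly = hourly;
    }

    double calculate(double hours, double rate) {
        if (hourly) {
            double overtime = Math.max(0, hours - 40);
            return (hours - overtime) * rate + overtime * rate * 1.5;
        }
        return hours * rate;
    }

    public static void main(String[] args) {
        // Callers now configure an instance once, instead of passing a
        // mysterious true/false at every call site.
        PayCalculator hourlyCalc = new PayCalculator(true);
        PayCalculator straightCalc = new PayCalculator(false);
        System.out.println(hourlyCalc.calculate(45, 10));   // 475.0
        System.out.println(straightCalc.calculate(45, 10)); // 450.0
    }
}
```

Notice that each test fixture would now hold its own pre-configured instance, which is what makes the tests start to read as though two derivatives were in play.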
This had a profound effect on the tests. The tests look almost like they are using two derivatives rather than two instances of the same class. Indeed, we should probably continue pushing the refactoring in that direction. Creating two derivatives is simple enough. First I changed the tests to create instances of the derivatives, and then I wrote the derivatives themselves.
That sets things up nicely. Now I just need to push the calculate method down into the two derivatives.
Nice! Now all I need to do is refactor the two derivatives.
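The final gists are missing from this post as well; a sketch of the shape they would take is below. The class names are assumptions, and in the post each class would live in its own file.

```java
// Sketch of the final shape: a tiny abstract base with one small
// derivative per business rule.
abstract class PayCalculator {
    abstract double calculate(double hours, double rate);
}

class OvertimeCalculator extends PayCalculator {
    // time-and-a-half for hours over 40
    double calculate(double hours, double rate) {
        double overtime = Math.max(0, hours - 40);
        return (hours - overtime) * rate + overtime * rate * 1.5;
    }
}

class StraightTimeCalculator extends PayCalculator {
    // straight time for all hours
    double calculate(double hours, double rate) {
        return hours * rate;
    }
}

class Demo {
    public static void main(String[] args) {
        // Clients depend only on the abstraction.
        PayCalculator[] calcs = {
            new OvertimeCalculator(), new StraightTimeCalculator()
        };
        for (PayCalculator c : calcs)
            System.out.println(c.calculate(45, 10));
    }
}
```

Adding a new kind of pay calculation now means adding a new file that extends PayCalculator; none of the existing classes need to change.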
Now that's nice! Nearly the same number of lines of code as the original, and so much cleaner! But was it worth it?
Of course it was! The two business rules have been completely decoupled from each other. They are in totally different files, and know nothing about each other. If someone adds a new kind of pay calculation, like a SalariedCalculator, none of these files will need to change! (We call that the Open Closed Principle, by the way.) Think about what we'd have had to do with the old implementation! Booleans don't split into three very well.
Yes, this was worth it. It was worth it because we've all been impeded by bad code before. We all know that bad code slows down everyone who reads it, every time they read it! Bad code is like Herpes. It's the gift that keeps on giving.