Thursday, April 26, 2012

The Education Crisis: When universities become a waste of time!

Controversy was sparked earlier this month when the University of Florida (UF) allegedly considered a proposal to eliminate the computer science department. While UF officials maintained that this was not their plan and that they were only trying to make necessary cuts, others argued that the proposal was nothing short of an attempt to systematically shut down the computer science program.

Whether UF was actually planning to close the computer science program is not really what I am interested in. What really interests me is the fact that educational institutions - and especially technical programs - have increasingly become a topic of discussion where financial cuts are concerned. And whether we like to admit it or not, this is a strong signal that the overall sentiment towards universities is not that positive. At the end of the day, this is an economic debate, and it all boils down to the economic value of universities as perceived by governments and legislators. So it is only appropriate to ask: Are universities doing a good job from an economic point of view? More specifically: Are universities producing job-ready students who are able to contribute to the economy?

(1)
It is often argued that it is not the university’s job to prepare individuals for a job, but rather to give them the necessary soft skills to do well in their future jobs. Advocates of this school of thought promote the idea that universities are mainly there to teach students how to learn, how to do research, and how to work and communicate with others. On the opposite side of this debate, some argue that universities should provide students with relevant and practical knowledge that will allow them – once they graduate – to land a job quickly and excel in it. Universities, they continue, should teach students the things they will experience and have to deal with in a real job.

Universities themselves seem to be confused as to why they really exist. If they exist to give you practical knowledge, then they are definitely not doing a good job. Pick any university graduate and examine their readiness to take on a real job and you will be shocked. Ask an electrical engineering graduate who spent four or five years learning "electrical engineering" concepts to design an electric circuit for a small house and be darn sure not to come close to it - because chances are it would be a safety hazard. Unfortunately, graduates from other engineering disciplines as well as computer science and information systems are not any better. 

On the other hand, if universities are supposed to teach you how to gain knowledge – or to put it in fancier terms, universities do not give you fish, but they teach you how to catch fish – then, once again, that is obviously not what they are doing. You can do a simple experiment by looking at the exams students have to pass to graduate with a degree in something. You will notice that passing most exams is highly correlated with the students’ ability to recall the knowledge we have provided in class – and not their ability to research and/or gain new knowledge.

I argue that universities can do both but I am not delusional enough to believe that they already do.

Something is fundamentally wrong when we keep students for four to six years in a university program, and then tell them – with no regrets – that what they will see in a real work environment is nothing like what they have learnt or practiced at university. What a waste of time! Maybe then we should not be surprised that new high school graduates are willingly opting for technical institutes or community colleges. There, you spend two years learning something that is actually useful in real life, and your chances of landing a job once you are done are higher. At a technical institute, they don’t waste the first few weeks of the semester telling you about the history of organic chemistry. They don’t teach you four different methods to do something, and then tell you that none of them is actually used in today’s practice. They teach you things that matter, things that are still relevant in this day and age, things that you will actually use once you graduate and start your job.
(2)
I fully understand and appreciate the difficulty of taking a hands-on approach to education while maintaining a level of abstraction to prepare students to become thinkers and self-learners. But I also think that putting students in a bubble of abstractions and non-relevance for four years – if at all they make it to the fourth year – is the main reason students are usually not ready to take on a real job once this bubble has burst.

We need an intervention.

And we need to start by trying to overcome the denial problem we have. As educators and administrators of educational institutes, we need to acknowledge that there is a serious problem in the system, and it had better be fixed before it is too late. We need to admit that universities are generally not able to achieve the goals we hope they would. We have to ask why – in a considerable number of institutes in Canada – it takes six years to graduate only one of every three students who enroll full-time. Student dropout rates in technical programs are also skyrocketing at many universities. We need to accept that the problem may well not be the students themselves, but the system in which these students feel irrelevant, unaccomplished, or overwhelmed. We also need to realize that our universities may not be coping well with the ever-changing demands of the job market, and that by itself is reason enough not to take the university route.

The other thing we need to do is study the curricula of colleges and technical institutes and learn – yes, learn – how they manage to graduate students in two years who may land a job even before they go on stage to get their degree. It is truly mind-boggling – and very embarrassing – when you hear stories of people (smart people) who actually had to do a one- or two-year program at a local institute after they had graduated from university so that they could compete for jobs in highly competitive markets!
 
The third step is to revisit our hiring approach. We need to handpick educators who understand the value of giving relevant knowledge, who share the vision of job-ready students, and who know how to make it happen in the classroom. We need to put more effort into making the classroom more interesting, more hands-on, and more intellectually rewarding. At the same time, we need to set standards for our educators. We need to make peace with the fact that not every academic is a good instructor. We should throw the ‘a-good-researcher-can-also-teach’ philosophy out the window, and instead open the door for creativity in the classroom and reward it. Instructors should be in the classroom because they want to be there, not because the university forces them to be there (footnote: most universities require that all professors teach a certain course load per year).

(3)
A few years back, I watched a video of a professor at MIT who installed a pendulum in the classroom and actually rode the ball at the end of the string. And while the pendulum was swinging back and forth, you could see the instructor going back and forth with it. The scene was hysterical. The students were laughing in disbelief as the instructor swung back and forth while his stopwatch counted the seconds. The instructor proved to the students that the time he spent on the swing was the same time calculated by the formula written on the blackboard, and then he concluded his act with a very proud statement: “Physics works. I’m telling you.” At that point I envied those students. I so wanted to be in that classroom, sharing the excitement and enjoying the science. The science I can understand and relate to. The science that sticks with me forever, not the science I can only store in my brain for a few months until I have the opportunity to throw it up on an exam paper. Having watched this amazing lecture, I asked myself: if we had a similar learning environment at all universities, would any student in the classroom be yawning, or facebooking, or youtubing, or impatiently staring at their watch? Would any student leave the lecture not fully understanding physics? Would any of our students not be eager to attend a second and a third and a fourth lecture – willingly, if I may add?

We would not be exaggerating if we described the current situation of our universities as an education crisis. A crisis that is very expensive because – as cliché as it may sound – it touches the very essence of what makes civilizations flourish and advance: the young minds, the builders of the future. But I also believe that we are able to get past this stage. And we can, with enough societal and governmental momentum, steer our educational system in the right direction. A better education system means a readier workforce. And a readier workforce means less post-graduation training and less waste of time and money. A better education system is our ticket to a healthier economy and a more promising future. Maybe only then will a proposal to eliminate a computer science program sound completely outrageous and not even make it to the round table.

Friday, April 20, 2012

Is it just me or is this door stupid?

Have you ever tried to open an exit door like the one in the picture and pushed the wrong side of the handle? The bad news is that it feels very stupid when you can't open a simple door on the first attempt. The good news is that you are not alone!
http://i01.i.aliimg.com/photo/v0/106222079/FIRE_EXIT_DOOR.jpg
Until fairly recently, cognitive psychology promoted the idea that humans are reactive creatures who behave according to a stimulus-response mechanism. For example, when you see a 'Press any key to continue' message, you react (or respond) to the message by actually pressing a key. It's an immediate reaction to a sudden stimulus. This notion has significantly influenced the design of the things around us, computer devices and software included. To illustrate this in more depth, let's go back one decade in time. Remember the good old Nokia 3310?
Nokia 3310 - http://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Nokia_3310_blue.jpg/150px-Nokia_3310_blue.jpg
Here is how you would send an SMS using a Nokia 3310 (and most other old devices):

1) You go to Messages - as a response to seeing the Messages icon on the screen.
2) You write the message - as a response to seeing the large text box and a blinking cursor.
3) Once you're done, you go to 'Options' - as a response to seeing the 'Options' button.
4) Then you hit Send - as a response to seeing the send option.
5) Finally, you key in the number of the intended recipient - as a response to seeing the small rectangular box that says 'number' (or you choose a recipient from your phone book).

You can see that every step in the process is based on the assumption that if a stimulus is strong enough to trigger some behavior, then the user will respond in the correct way.

For certain things, the stimulus-response mechanism has been employed very effectively - and still is today. For instance, red and green have been consistently used to signal unsafe and safe operations respectively. Almost every new phone uses green to 'Answer' a call and red to 'Reject/Decline' it.
HTC - http://htcevo3dtips.com/wp-content/uploads/2011/09/HTC-EVO-3D-answer-call.jpg
iPhone - http://www.filecluster.com/reviews/wp-content/uploads/2008/11/iphone_fake_calls.jpg
Nokia - http://dailymobile.se/wp-content/uploads/2011/06/incoming_call21.jpg
Nonetheless, the assumption that everything could be designed based on stimulus-response turned out to be overambitious. It is unreasonable to expect all humans to respond to a given stimulus in the same exact way. The burden is on the developer and/or the designer to ensure that the stimulus is strong enough that most users respond in the intended way. And unless the stimuli are somewhat universal (such as red and green), that is simply too much to ask. So what is the solution?

Well.. It turns out that humans are not merely reactive after all. They are also intelligent, proactive beings. Cognitive psychology was turned upside down when we came to the realization that humans do not simply react to stimuli, but rather build expectations, anticipate events, and most importantly create mental models. And that is precisely why humans complain about things that do not align well with their mental models. For example, before you open a water tap, you build a mental model of the path of the water. Once the water starts running, it should pour right into the sink. When it doesn't, you realize that something is off.
http://pic.epicfail.com/wp-content/uploads/2009/07/sink-fail.jpg
Now that is a very simple example, but it highlights a very important principle in the psychology of engineering. Once we absorb the concept of mental models, we will be able to understand why, in the old days, some people called call centers to ask where to find the 'any' key on their keyboard. We will understand why many people who see a big door handle do not know where to push. In their mental model, a door has to rotate around an axis on one side, and unless that side is clearly marked (like in the picture below), it is difficult to know where to apply the force. Needless to say, if you have to write instructions for opening a door, then maybe your design is simply stupid.
Don Norman & The Design of Everyday Things
At this point, you should also be able to explain why new phones handle sending messages the way they do. Using an iPhone or virtually any new mobile device (including Nokia devices), you will notice that the process of sending a message is pretty much the same except for one major thing. Can you guess what has changed? 


The key difference is that on newer models, steps 2 and 5 are swapped. You no longer type the message and then choose the recipient; you do it the other way around. Because, according to our mental model of communication, we almost always think about the recipient before we think about the exact content of the message: "I want to text Sam to see how he is doing," "I want to tell Sara that the meeting is cancelled," "I will tell everyone about the party," etc.

In the next post of the usability series, we will expand on this concept a little bit more and look at its practical applications in building user interfaces.

Tuesday, April 17, 2012

When "functional" is not good enough: How usability can make you fly or make you die..


In the late 1990s, IBM introduced the IBM RealPhone - a software application that allowed users to make phone calls from their computers. The phone was called 'real' because.. well.. it just was. It looked real. It had a handset and a dial pad and a digital rectangular screen - everything you would expect on a 'real' phone. But did it feel real?
Standards be damned
http://homepage.mac.com/bradster/iarchitect/phone.htm
At first glance, the idea seemed groundbreaking, especially with IBM promoting the product with slogans like “Welcome to the future; one without distracting windows and menu bars”, “If you can use a telephone, you can use this software”, “Novice users can use it immediately”, etc. With all these big promises being advertised, it was very embarrassing for IBM to find out, shortly after releasing the product, that the RealPhone ranked first in the Interface Hall of Shame. It was found to be “violating nearly every aspect of proper interface design.” It was a big failure for IBM because - as it turned out later - absolutely no usability tests had been run before the product was released.

In the early days of computing, usability was completely overshadowed by functionality. People were so impressed with what computers had to offer that they did not care if it took them twenty steps to accomplish a given task, or if they had to wait for hours to calculate a number, or if they had to punch in sixteen long queries to do a simple search. All that mattered was that the computer was able to accomplish the task. And that is without even getting into other important factors, such as the immaturity of input and display technologies.

Not long after this initial period of fascination, the story of computers took a drastic turn. The personal computer started to take its place in almost every household. We had mice, keyboards, and color displays that pretty much started a revolution in graphical user interfaces. Software builders began to think about interfaces more critically and some - like Steve Jobs - more aesthetically. Functionality was no longer the only concern. People started to take usability more seriously. And as competition heated up between software rivals, end-users got spoiled and very picky about the software they chose for document editing, graphics design, media playing, etc. Rarely could any company get away with poorly designed software just because it was functional. New domains started to emerge, such as interaction design and usability testing. Gurus such as Jakob Nielsen outlined usability principles and design guidelines that proved to be very useful and practical for years to come.

We have to admit that we have come a long way from what I call the infant-fascination stage, when we were wowed by the mere fact that computers could do something for us - just like infants are fascinated by the mere discovery of their hands and feet. But we also need to recognize that, in this day and age, even more dramatic changes are taking place at a very fast pace. With the advent of touch devices like the iPad and most smart phones, digital tabletops like Microsoft Surface, and absolutely mind-blowing input technologies such as Microsoft Kinect, we find ourselves puzzled by how the usability status quo could possibly cope with these changes. Design principles that applied to mouse-based user interfaces cannot be applied as-is to touch-based interfaces. A fingertip is orders of magnitude bigger than a mouse cursor or a stylus tip. Windows, menus, and scroll bars are only a few examples of all the things we need to revisit in order to build truly usable interfaces for orientation-agnostic devices such as digital tabletops.

The challenge of usability is here to stay. And especially nowadays, it seems to be where the competitive advantage lies. If you build aesthetically pleasing devices (if you are building hardware at all), with intuitive interactions and a simple interface, then your chances of grabbing the lion's share of the market in any given domain are undoubtedly higher. Apple's iPod is a prime example of that. Google's search engine is another. Neither of these products was first to market in its domain. If anything, Google came very late to the market of search engines. The iPhone was not even the second or third to market in the domain of smart phones, but Apple still managed to sell a grand total of 100 million devices in a few years! I would actually go further and state that usability has now become a more important factor than functionality. Just think about the specs of Apple's iPod compared to those of Microsoft's Zune. The latter was superior in many ways - but definitely not in aesthetics and usability. After generations of improved Zune devices, Microsoft could not get more than roughly 3% of the market compared to about 70% for Apple (50% of which were new customers - just to avoid going into the 'loyalty' argument). In 2011, Microsoft was finally courageous enough to put an end to the troubled journey of the Zune and pull the plug on the whole idea of competing with the iPod.

Now that we have established that usability does provide business value (if you consider selling 100 million devices a business value, of course), we can talk about what you can do as a software practitioner or as a software company to not only build usable systems, but also promote a culture that looks at usability as a competitive advantage and even as a business niche.

Stay tuned for more posts on the "usability series"!

Tuesday, April 10, 2012

What makes a good interview question

Different interviewers have different styles of asking questions. Especially when it comes to technical interviews, the interviewer usually has more freedom to decide what qualifies as a good question. In this post, I would like to talk a little bit about what I think makes a good question.

In the wide spectrum of interview questions, we can distinguish three different areas that interviewers like to test: fundamentals, technology-specific issues, and problem-solving skills. Depending on the interviewer's style, you may see more or less focus on each of these areas. Let's take programming as an example. Questions about fundamentals (i.e. theory) involve topics like object-oriented programming, recursion, data structures, etc. Technology-specific questions, on the other hand, tackle things like garbage collection in Java, delegates in C#, pointers in C++, update and draw in the XNA Framework, etc. Lastly, to test problem-solving skills, you see questions that combine the application of fundamentals in solving a relatively abstract problem, such as finding all the possible moves on a chessboard.

So, the question is how much focus should we give each of the three areas to determine if the candidate is a good fit for the job (technically speaking)?

Fundamentals 
Fundamental questions are important because they tell you something about the background of the person at the other end of the table. If the person cannot tell you what a class is, or what we use inheritance for, then maybe they are interviewing for the wrong job. Having said that, I believe that we generally overdo it when it comes to this part of the interview. I honestly don't see the value of asking detailed questions like how the OS handles memory allocation, or what the difference is between an inner join and an outer join, and the like. I know for a fact that many interviewers love to delve deep into such details, but I think it is an utter waste of time, and even a risky approach. If an interviewee knows all the theory behind programming and databases, that does not tell you whether they are a good programmer. It merely reflects their ability to recall information they read in textbooks. And using this approach you run the risk of losing excellent candidates who do not necessarily have this information readily available in the back of their minds. I have seen many individuals who understand the theoretical concepts very well, but simply cannot put labels on them. My 1-year-old boy just started to understand the physics of gravity. He knows that if he pushes my laptop off my desk, it will land on the floor (maybe in more than one piece.. but that's another law to learn). But does my son know that this is called gravity? Even long before Newton, people knew about gravity and used it in practical applications. But it was Newton who put a label on it and studied it in more detail. And it is really difficult to claim that no one before Newton understood physics just because they didn't know the term 'gravity'.

To conclude this part, my recommendation is the following: use a few general fundamental questions as a quick filtering mechanism, but don't go deep into the details.

Technology-specific questions
Avoid these at all costs. Unless you need to hire someone with very specific experience in a certain technology, you want to stay away from language-specific or tool-specific questions. For example, if you need someone to maintain a SharePoint server, then they had better know something about SharePoint. But if the scope of the job is not that specific, then what you should really be looking for is the candidate's ability to learn on their own.

But how do you do that? Simple. Ask the candidate to accomplish a specific task using a technology they have never used before. Provide them with all the things that would be available in a real-life scenario (e.g. API documentation, access to the internet, etc.), and see how much they can accomplish in a given time frame. You will be surprised by the vastly different learning abilities of your candidates. This kind of exercise is also an excellent way to test their research skills, as well as their stress and time management skills.

In conclusion, what you want to avoid is rejecting a good candidate because they have been using C++ and C# instead of Java for the past five years, or because they are used to an IDE to build and run their projects instead of a Linux command line.

Problem-solving questions
This should be the main focus of your interview. However, the objective of a problem-solving question should not be to test whether the candidate can reach the correct answer, simply because a candidate who has heard or read about the question before would have a significantly higher chance of getting to the right answer than someone who has not. Also, how you phrase the question might lead different candidates in different directions. What you really want to look for here is whether the candidate is capable of analyzing the problem and communicating their thought process. You want to see if they are able to apply the fundamentals (e.g. recursion, arrays, etc.) to solve the problem at hand. Ask them follow-up questions as to how they could improve a given solution, or whether there is a better way of thinking about the problem. Stimulate their brains, and don't hesitate to give hints or add obstacles depending on how well the candidate is doing.

To summarize, this part of the interview should give you and the candidate a chance to communicate at an intellectual level. The onus is on the candidate to demonstrate that they are capable of using a number of leads to head in the right direction when thinking about the problem at hand.

Finally, as an interviewer with hiring authority, you will find it very tempting to show off that you know more than the person in front of you. Interviewers usually like to have fun in interviews by asking all kinds of ridiculous questions and maybe discussing controversial topics. There is nothing wrong with that, as long as we draw a clear line between having fun and getting the job of hiring the best candidate done.

Thursday, April 5, 2012

Technical interview questions for fresh graduates

Many of my colleagues and former students have landed jobs in industry in the past year. 2011 was by all standards a good year for IT graduates. At least here in Alberta. Interview questions vary widely depending on the company/team/department/job title/interviewer and many other factors. However, based on my recent research and conversations with new hires as well as interviewers, I noticed a stable pattern in recent interviews. Interviewers do not seem to be getting more creative with their questions - maybe because they don't see the need to. Anyway, I will try to summarize some of the points you may want to know before you go for an interview these days.

There are three different types of what you would label as "technical questions". First, you have the straightforward, purely technical type of question. For example:

- What is multiple inheritance?
- Describe Polymorphism.
- What are partial classes in C#?
- What are mock objects?
- What are delegates in C#?
- How do you implement multi-threaded applications in Java?

Of course, depending on your answer, you may expect follow-up questions like: What can go wrong with multiple inheritance? How do you work around the lack of it in Java?
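To answer that last follow-up, a candidate might sketch the usual Java workaround: implement multiple interfaces instead of inheriting from multiple classes. A minimal sketch (the types here are toy examples, just for illustration):

    // No multiple class inheritance in Java, but a class may implement
    // any number of interfaces.
    interface Swimmer { void swim(); }
    interface Flyer   { void fly(); }

    class Duck implements Swimmer, Flyer {
        public void swim() { System.out.println("paddling"); }
        public void fly()  { System.out.println("flapping"); }
    }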

http://dilbert.com/strips/comic/2003-11-27/

The second type of question is supposed to test your problem-solving skills. At the end of the day, you might be able to answer all the theory questions above (all 'A' students should), but you might not necessarily know how to actually use these concepts in problem solving (not all 'A' students can). Here are a few examples from very recent interviews:

Q1. You have a singly-linked list. You do not have access to its head, but you do have a pointer to one of its items. How would you go about removing that item from the list?
- The trick here is to understand what exactly is needed. Many interviewees will quickly jump to thinking about how to delete the node that contains the item. Of course, you will not be able to do that without a pointer to the previous node (or at least the head) - otherwise, you will break the linkage of the list. So what do you do? Shift the items one by one to overwrite the item you want to delete, and then remove the node at the end of the list, as in the sketch below.
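A minimal sketch of that shifting approach, assuming a bare-bones hypothetical Node class (note that the tail node itself cannot be removed this way, since there is nothing to shift into it):

    class Node {
        int value;
        Node next;
    }

    // 'node' points at the item to remove; the head is not available.
    static void remove(Node node) {
        // Pull each successor's value one step toward the head...
        while (node.next != null) {
            node.value = node.next.value;
            if (node.next.next == null) {
                node.next = null;   // ...then drop the now-duplicated tail node.
            } else {
                node = node.next;
            }
        }
    }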

Q2. How would you model a Chicken in Java?
- Surprisingly enough, this question does not seem to get old. I used this example in one of my software engineering lectures. The next lecture, one of my students told me that she got the same exact question in her interview on that very day! She was very lucky.
There is really no one correct answer to this question. All you need to do is ask questions to understand what qualifies as attributes of the Chicken class (e.g. weight, age, body parts, etc.) and what kind of behavior is expected (i.e. member functions like layEgg(), cluck(), getSlaughteredAndGrilledOnMyNewBBQ()... sounds cruel, I know. But be careful here not to do too much in one function. Maybe you should define one function to slaughter and another to grill). One possible starting point is sketched below.
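A hypothetical starting point - the value is in the conversation about attributes and behaviors, not in any particular answer:

    class Chicken {
        private double weightKg;    // assumed attributes, for illustration only
        private int ageInDays;

        void layEgg() { /* ... */ }
        void cluck()  { System.out.println("Cluck!"); }

        // Small, single-purpose behaviors instead of one do-everything method:
        void slaughter() { /* ... */ }
        void grill()     { /* ... */ }
    }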

Q3. In an unsorted array of integers, we have the numbers from 1 to 100 stored except for one number. How do you find that number?
- If the array is sorted, it is a simple traversal to check for arr[i+1]-arr[i] != 1.
- If the array is not sorted, then you can add up all the elements and use the mathematical formula to compute the missing element (the sum of the numbers from 1 to N is N*(N+1)/2; see the sketch after this list).
- You can also use a HashTable to find the missing element unless the question specifies that you can't.
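A rough sketch of the summation approach in Java (assuming, as in the question, that the array holds 1..N with exactly one value missing):

    static int findMissing(int[] arr) {
        int n = arr.length + 1;           // the full range is 1..n
        int expected = n * (n + 1) / 2;   // closed-form sum of 1..n
        int actual = 0;
        for (int value : arr) {
            actual += value;
        }
        return expected - actual;         // the difference is the missing number
    }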

Q4. Without using a loop, how do you check if all the characters composing a string are the same? For example "11111" or "bbb"? You can use any built-in string method.
- Substring the given string from the first character until the one before the last. Then substring starting from the second character until the last. Check if the two substrings match.

s.substring(0,s.length()-1).equals(s.substring(1,s.length()))

- Take the first character, replace all of its occurrences with an empty string, and make sure the result is empty. (Prefer replace over replaceAll here: replaceAll treats its first argument as a regular expression, which breaks for characters like '.' or '*'.)

s.replace(s.charAt(0) + "", "").isEmpty()

- You can always use recursion if the question did not specify that you can't (a sketch follows after this list).
- If the language provides support for simple conversion, convert the string into an array, and the array into a set. Check that the produced set has only one element.
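For instance, a quick sketch of the recursive variant:

    static boolean allSame(String s) {
        if (s.length() <= 1) {
            return true;   // empty and single-character strings qualify
        }
        // Uniform if the first two characters match and the rest is uniform.
        return s.charAt(0) == s.charAt(1) && allSame(s.substring(1));
    }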

Q5. How do you efficiently remove duplicate elements from an array/list?
- You need to avoid any algorithm that is O(n²) or worse.
- Put them in a set and put them back in the array/list.
- The question gets very interesting when there are more constraints, like not allowing extra storage, maintaining the order of the elements, not allowing sets, etc.
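A rough sketch of the set approach in Java; a LinkedHashSet also preserves the original order of the elements, which covers one of those follow-up constraints:

    import java.util.ArrayList;
    import java.util.LinkedHashSet;
    import java.util.List;

    static List<Integer> removeDuplicates(List<Integer> items) {
        // The set drops duplicates in O(n); LinkedHashSet keeps insertion order.
        return new ArrayList<>(new LinkedHashSet<>(items));
    }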

Good luck!

Tuesday, April 3, 2012

Non-nullable value type error

The Problem
Working with XNA to develop a game for Windows Phone 7, I came across this issue of non-nullable value types. What I was trying to do was simple: given the coordinates of a touch position x and y, return the color of the object at that position if there is one; otherwise, return null.


        // 'gameObjects' stands for a hypothetical list of game objects,
        // each carrying a bounding rectangle and a color.
        public Color getObjectColorAtPosition(float x, float y)
        {
            foreach (GameObject obj in gameObjects)
            {
                if (obj.Bounds.Contains((int)x, (int)y))
                {
                    return obj.Color;
                }
            }

            // since we did not find any match to the given x and y
            return null;
        }

But the compiler didn't like it, and I got the following error:
"Cannot convert null to 'Microsoft.Xna.Framework.Color' because it is a non-nullable value type"

You will get a similar error if you try to assign null to Vector2, GestureSample, TouchCollection, and of course many other value types. For example, you will get the same error if you try to do:

Vector2 vector = null; 

The Solution
To state the obvious, you need to change the non-nullable type into a nullable one. And this is how you do it:


        public Color? getObjectColorAtPosition(float x, float y)
        {
            foreach (GameObject obj in gameObjects)
            {
                if (obj.Bounds.Contains((int)x, (int)y))
                {
                    return obj.Color;
                }
            }

            // since we did not find any match to the given x and y
            return null;
        }

Nothing changed except the question mark next to the return type of the function. In a similar way you can declare Vector2 as:


Vector2? vector = null; 


But then if you want to read the content of the Vector2? object, you have to go through the Value property (ideally after checking HasValue):

if (vector.HasValue)
{
    float x = vector.Value.X;
    float y = vector.Value.Y;
}

Note that Value returns a copy of the wrapped struct, so assigning to vector.Value.X directly will not compile. To change the vector, assign a whole new value, such as vector = new Vector2(4, 2);.

Essentially what you are doing here is wrapping the non-nullable type within a nullable one. The longer method of doing this is by creating a Nullable instance:

Nullable<Vector2> vector = null;

Why is this happening in the first place? 
Well.. Ask Microsoft. But here is a quick glimpse: some language designers think it is better practice not to allow null as a sentinel value for structs or even class instances. This way, the object can be used anywhere in your program without having to verify that it is not null.

Others, including me, think that this is not something the language has to concern itself with. Leave it up to the developer to decide how to deal with null values. Or at least make it the other way around so that the developer has to specify (with a question mark) if an object should be non-nullable.

A funny comment I read on this topic: "what's next? non-zero integral types to avoid divide by zero?"

Monday, April 2, 2012

Does business value have to revolve around your customer?

No.

Once upon a time, a wise marketer said: "Revolve your world around the customer and more customers will revolve around you." Agile methods promote this notion of making the customer the center of attention. And one way agile teams have been trying to achieve that is through creating business value for the customer.

As I mentioned in my previous blog post, my observation is that many software practitioners believe that business value has to be somehow linked to the customer. I personally think that this notion is false, and that it is causing many conflicts in the way we plan and prioritize work items. Take refactoring as an example. Sometimes, in the midst of an iteration, somebody on the team cries out loud for people to refactor their code. But what does refactoring give the end user? Nothing. By definition, refactoring should not change the external behavior of your system. Therefore, your customer does not give a shampoo whether you refactored or not.

Another important example is when software platforms are involved. The work that is being done at the component level (i.e. services, search engines, filters, etc.) may not be directly visible to the customer. Component teams often need to spend time redesigning their components so that a wider range of feature teams can use them in their product development. This redesign effort may not in any way create business value for the end customer. So should we abandon refactoring or redesigning altogether?

If you really think about it, the main issue here is the word 'business'. There is no disagreement that all of the above does create some 'value'. But for which 'business'? Some work items (e.g. new features) create value for the customer's business. Other work items (e.g. refactoring) create value for our own business as a software company. Needless to say, maintaining and testing clean, well-refactored code is easier and faster than dealing with spaghetti code. So refactoring does provide value to our business; it is just not visible to the customer. In the same way, having well-designed components makes it easier for more feature teams to use those components in their product development. Reuse, if done right, provides significant value - but again, not to the customer, rather to us developers.

How we use the term 'business value' hugely affects the culture in the software organization. If we allow a looser definition of what business value is, teams will no longer be puzzled thinking about why on earth we need to refactor or produce regression tests or write a wiki or or ... Business value ought to mean more than just a happy customer.. Business value is also about making your business better.