Monday, April 09, 2007

Global Warming Heats Up

I hate it when the press starts talking about science. The press doesn't get science. Most science does not produce useful sound bites, and the sound bites that do get created are usually only useful for convincing people of something that has no scientific basis. I thought I would throw some more information into the fray. I am not a climatologist, nor a scientist of any kind. Science is essentially a hobby for me. I am fascinated by how things work and by understanding natural systems. Add to this a general conservationist mindset and, of course, I am interested in global warming. But let's look at what we know about global warming.

Global Warming Facts
Is There Global Warming?
This is not really a question. I live in Wisconsin. One of the things everyone in Wisconsin has heard of is the Ice Age Trail. The Wisconsin landscape was carved by glaciers. There are currently no glaciers in Wisconsin. Obviously the planet is warmer than it used to be; therefore, global warming. This is the problem with answering this question: it's really the wrong question.

98% of Scientists Support Global Warming
Yea, team, but who cares? I don't care what sociologists, political scientists, or archaeologists think about global warming. Don't get me wrong; these are likely to be really smart people. It's just that they are not likely to know any more about the Earth's climate than I do. What I want to know is how many climatologists and meteorologists think that the Earth is warming in a way not explained by climate cycles. I also want to know where they think the temperature will end up.

Human activity has an effect on Global Warming
OK, this one is a great example of a bad scientific statement. Human activity produces CO2 (among other things). CO2 is a greenhouse gas and therefore holds heat within the atmosphere. Obviously, human activity has a contributing effect on global warming. The issue is how much. It is true that when you find yourself in a hole you should stop digging, but are we digging with a teaspoon, a shovel, or a backhoe?

I'd love to give you answers to these, and I will do some research, but I am here to tell you that the answers are not easy to find, in part because the press spends so much time talking about the wrong questions.

I'll post more as I can figure some of it out.

Monday, February 12, 2007

The Computer is Dead, Long Live the Computer

Finally, the follow-up to my February 2007 post.

If you look up the definition of "computer" you will find that it has a second definition that may surprise you. It is actually not a second definition at all, but the original definition of the term: a computer is "a person who computes; computist." So where did that come from? Well, back in the stone age, before the personal computer or fire, accounting departments used to be full of people who would keep the books. These people were not accountants; an accountant is someone you hire to help you make sense of all the numbers. An accountant can tell you whether you are making money, and how much. This is often much more difficult than you might imagine. It is so difficult that we still don't have computer programs that will do it. Instead we have Excel.


Obviously Excel means that we no longer need computers (the human version) in an accounting department, right? True, but only sort of. Though we no longer have rows of people with their heads bent tallying columns of numbers, we do still need bookkeepers. When personal computers began to automate what human computers were doing, we quickly learned that there were tasks that were easy and tasks that were exceptionally hard. By easy and hard here I am talking from the computer's perspective, not the people's. Accurately tallying a column of numbers is difficult for many people; when you add in the need to do it quickly and without error, it is nearly impossible for most. Computers find this very easy, however. Other tasks that the computers (the people, not the machines, now) were doing are very difficult or impossible for a PC, tasks such as deciding which category an entry belongs in. Maybe you can create a rule like "all invoices from Krispy Kreme Doughnuts go into business expenses," but these types of rules typically break down quickly. So even though the people in the accounting department are not doing sums anymore, there are not that many fewer of them than there were prior to the PC revolution.
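To make that concrete, here is a rough sketch in C# (the only thing taken from real life is the doughnuts; the invoice class and the category names are invented for illustration) of what such a hard-coded rule looks like, and why it breaks down:

    using System;

    // A hypothetical, deliberately naive rule-based categorizer. Every special
    // case becomes another branch, and the branches never quite cover what the
    // business actually does.
    class Invoice
    {
        public string Vendor;
        public decimal Amount;
    }

    static class Categorizer
    {
        public static string Categorize(Invoice invoice)
        {
            // Rule: all invoices from Krispy Kreme Doughnuts are business expenses...
            if (invoice.Vendor == "Krispy Kreme Doughnuts")
                return "Business Expenses";

            // ...until the doughnuts were for the holiday party, or for a client
            // meeting, and should have been categorized differently. A bookkeeper
            // makes that call in a second; the rule cannot.
            return "Uncategorized";
        }
    }

    class Program
    {
        static void Main()
        {
            Invoice invoice = new Invoice();
            invoice.Vendor = "Krispy Kreme Doughnuts";
            invoice.Amount = 42.50m;
            Console.WriteLine(Categorizer.Categorize(invoice));
        }
    }

The rule works right up until it doesn't, and then someone has to decide whether to add another branch or just let a person handle it.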

Why do I bring this up, especially in connection with software development? The issue is similar. Back in January of 2007 I wrote an article about a company called Intentional Software. I talked about the issues facing software development and why I didn't think Intentional Software was going to change the software industry. The trick is that they are trying to create a piece of software that makes software development easy, and software development is hard. Software development is not hard because we have to learn cryptic languages to talk to the computer. Software development is hard because it is process automation, and processes are inherently hard. We are back to making all those fuzzy decisions, like which category an invoice goes into.

I am currently working on a project to create a panel that is ridiculously easy to use. There are some particular challenges to this project, as it is a computer that will be used on a shop floor. Keyboard and mouse are not completely out of the question, but it will have to work without them most of the time. This has led me to think about what "easy to use" means. Here are my ideas.

The Interface Must be Apparent
This is why GUI programs are often seen as easier to use: they are more apparent. When people are working they do not want to go looking for things; they want to just do whatever they are doing. As shocking as this may be to computer programmers, most people do not enjoy spending time on a computer. Furthermore, most people find figuring out arcane steps to complete a process a bore and more than a little annoying. At any given moment the interface needs to make the available options as obvious to the user as it can.

Actions Should Work with as Little Interaction as Possible
Every question that the user is asked is a distraction. When I ask my word processor to put today's date into a document, it should use the format that makes sense given my locale (locale being language and country). It would be nice if I could then change the format, but asking me to choose the format every time I ask for the date is bad. The vast majority of the time I want the default format for my locale. Similarly, when I save a file I should be asked where to put it and what to name it once, not every time. After I have established the file's name and folder, I will rarely wish to change them. These decisions depend on the expected use case, but I think you see where I am going. Another aspect of this is that I should not have to go hunting through the menus to find something I want to do. If it makes sense in the current context, then it should be available with one or two mouse clicks, and everything should also be available through the keyboard. In the case of the project I am working on, there needs to be a button on the touch screen.
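As a rough sketch of that idea (the method names here are hypothetical, not from any particular word processor), the default path asks no questions and uses the locale, while an overload exists for the rare user who cares about the format:

    using System;
    using System.Globalization;

    static class DateInsertion
    {
        // Default path: no questions asked, use the locale's short date format.
        public static string InsertDate()
        {
            // "d" is the culture's short date pattern, e.g. 4/9/2007 for en-US.
            return DateTime.Today.ToString("d", CultureInfo.CurrentCulture);
        }

        // Optional path: the user who cares can supply a format explicitly.
        public static string InsertDate(string format)
        {
            return DateTime.Today.ToString(format, CultureInfo.CurrentCulture);
        }
    }

    class Demo
    {
        static void Main()
        {
            Console.WriteLine(DateInsertion.InsertDate());                // locale default
            Console.WriteLine(DateInsertion.InsertDate("MMMM d, yyyy"));  // explicit override
        }
    }

The point is not the two methods; it is that the common case costs the user nothing.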

Side Effects Should be Minimal
When I select an action I should not have to spend any time wondering what else it is going to do. Each command should be self-contained. In this way the user can start to think in terms of steps to accomplish some goal. If too much is wrapped up in a single action, the user can become torn between what they want to do and the side effect they wish to avoid. Furthermore, having minimal side effects encourages the user to use the software in ways that you did not expect.

Actions Should be Complete
This is the balance to the minimal side effects rule. The user should not have to build up what they consider to be simple actions. To save a file, it would hardly make sense to ask the user to select the default directory, then go through the menus and set the current file name, and finally go through the menus again and choose save. In addition to being cumbersome, this makes it more likely that the user will miss steps or otherwise mess up the process. There is a balancing act between minimal side effects and complete actions that is often difficult to manage. However, when it is managed properly there is a real sense of power in using that software.

Software must be Reliable
You may not think of this as an ease-of-use requirement. You may say all software should be reliable, and I would agree. In this case, however, reliability does enhance the usability of software. The goal of easy-to-use software is to create an environment where the user can focus on their goals rather than on using the computer. Any time the software fails, it interferes with this goal. Furthermore, it can cause users to second-guess themselves when they are attempting to accomplish something. They will wonder whether this mouse click is going to cause the program to exit.

So what do we need to make it easier to program? We need an environment that does these things, but we also need an environment that can choose data structures and algorithms well. Yes, I know you are an Object Oriented Programmer (OOP), so you don't use data structures or algorithms. Actually, when you create an instance of that Set class you are choosing a data structure, and when you choose to use the Singleton design pattern you have chosen an algorithm. These decisions are difficult. They tend not to have purely right/wrong answers. They tend to be highly dependent on how other questions were answered. Furthermore, they are dependent on how the process "should work", which is almost never known in a definitive way.
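A deliberately small example of how those two "non-decisions" really are decisions (the names are invented; nothing here is from the panel project):

    using System.Collections.Generic;

    // Choosing a collection type is choosing a data structure: a List keeps
    // duplicates and preserves insertion order, while a Dictionary keyed on the
    // part number gives fast lookup and uniqueness. Neither is "right" on its own;
    // it depends on how the rest of the process behaves.
    class PartBin
    {
        private List<string> partsInOrder = new List<string>();
        private Dictionary<string, int> partCounts = new Dictionary<string, int>();

        public void Add(string partNumber)
        {
            partsInOrder.Add(partNumber);
            if (partCounts.ContainsKey(partNumber))
                partCounts[partNumber] = partCounts[partNumber] + 1;
            else
                partCounts[partNumber] = 1;
        }
    }

    // Choosing the Singleton pattern is choosing an algorithm for object creation:
    // exactly one shared instance, created on first use. (This simple version is
    // not thread-safe, which is exactly the kind of consequence the decision drags in.)
    class Configuration
    {
        private static Configuration instance;

        private Configuration() { }

        public static Configuration Instance
        {
            get
            {
                if (instance == null)
                    instance = new Configuration();
                return instance;
            }
        }
    }
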

The short answer is that though software automation will continue to improve, and libraries will become larger and more powerful, software developers will not be wholesale replaced by machines any time soon. It is difficult to think about the things that need to be decided when developing software, and currently people still think much better than machines.

Tuesday, February 06, 2007

Unintentional software

In my last post I came down fairly hard on a company called Intentional Software. As I said in that entry, I do not mean to single them out, but I do think they are jumping a bit too far. I think the next step up the abstraction ladder is Component-Based Programming. This is neither my idea nor a new idea. Building software from cheap, reusable components has been the dream of software developers essentially from the beginning. So far the components are not cheap (most projects still build their own) or particularly reusable, but they still have many strengths. In my last job I had the experience of working on a project that used component-oriented practices. After a couple of years of work on the system, a team of about 10 developers had created 78 discrete components. What was more amazing is that we had created 4 distinct (though related) applications. I had the experience of prototyping one of those applications. I was able to pull together several of the existing components, write one that combined and extended the functionality of three other components, and wrap it all in enough code to create a functioning prototype of the new application in 2 weeks. It really blew me away how powerful this paradigm is.
This is what software developers have been looking for, except that we were creating our own components. We had also created our own framework for the components to live within. The framework incorporated both the services necessary for components (creation, version control, etc.) and an inversion of control container. This allowed us to inject customer-specific functionality into the architecture at the component edges. The project went on to be a success, with about 3 months of work to complete the first customer implementation. The next implementation took 7 weeks.
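I obviously can't reproduce that framework here, but a hedged sketch of the general shape (every name below is invented for illustration, not taken from the actual project) might look like this: components only know interfaces, and the container decides which implementation, possibly customer-specific, gets injected at the edge.

    using System;
    using System.Collections.Generic;

    // The component contract: callers only know the interface.
    interface IFreightCalculator
    {
        decimal CalculateFreight(decimal weightInPounds);
    }

    class StandardFreightCalculator : IFreightCalculator
    {
        public decimal CalculateFreight(decimal weightInPounds)
        {
            return weightInPounds * 0.50m;
        }
    }

    class AcmeFreightCalculator : IFreightCalculator
    {
        public decimal CalculateFreight(decimal weightInPounds)
        {
            // Acme negotiated flat-rate freight; this is the kind of
            // customer-specific behavior pushed in at the component edge.
            return 25.00m;
        }
    }

    // A toy inversion of control container. Real containers handle lifetimes,
    // configuration, and wiring whole graphs of components, but the idea is the same.
    class Container
    {
        private Dictionary<Type, object> services = new Dictionary<Type, object>();

        public void Register<TService>(TService implementation)
        {
            services[typeof(TService)] = implementation;
        }

        public TService Resolve<TService>()
        {
            return (TService)services[typeof(TService)];
        }
    }

    class Bootstrap
    {
        static void Main()
        {
            Container container = new Container();

            // Swapping this one registration customizes the application for one
            // customer without touching the components that use the interface.
            container.Register<IFreightCalculator>(new AcmeFreightCalculator());

            IFreightCalculator freight = container.Resolve<IFreightCalculator>();
            Console.WriteLine(freight.CalculateFreight(120m));
        }
    }
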
The "failing" of component oriented development is that this is still work being done by programmers. So it does not reach as far as "non-programmer" writing a system, but it is definitely a major step toward that goal. I am hesitant to call it a failing because it is a major step forward. So what is the next step toward non-programmer programming? Well as the question implies the problem is in understanding the term non-programmer.
If someone takes a set of actions that result in a program that does something, isn't that programming? I think, as I said previously, that the core of this goal is flawed. Someone who has learned enough to cause the computer to accomplish some goal is a programmer. The issue is not to remove the programmer, but to allow a business expert to become a programmer. The goal is to create an environment that is easy enough to use that the people closest to the problem can work on automating it directly. I did not say that the environment had to be easy enough to learn, but easy enough to use.

In my next post I will talk about the difference between easy to learn and easy to use. I will also talk about why Excel hasn't put more people out of work.

Monday, January 29, 2007

Everything Old is New Again

One of the aspects of having been involved with computers for 20+ years that I enjoy is being able to see how things keep coming back around. One story that keeps coming back is how "non-programmers" are going to be able to program one day. I just read a story in the New York Times about Intentional Software. For those who choose not to read the article, Intentional Software is a company that is in the process of making a set of tools that allow "non-programmers" to define the intention they have for how the software should work. Those intentions are then rendered as code, and finally the code is compiled into an executable.
Let me start by saying I think this is a great idea. Despite the fact that it may mean fewer software development jobs in the long run, I still think it is a great idea. Unfortunately it will not work. You see, it attacks the wrong problem. The article describes three advantages of Intentional Software's solution:


  1. The people who design a program are the ones who understand the task that needs to be automated.
  2. The design can be manipulated simply and directly, rather than by rewriting arcane computer code.
  3. Human programmers do not generate the final software code, thus reducing bugs and other errors.

The first advantage is simply not true. The idea that there is a domain expert who completely and thoroughly understands any business process is a myth. There are certainly people who understand some aspects of a process, but there is no one who understands the process completely. Furthermore, even if you could find one person who understands the process well, that person has an ideal implementation in mind. This is almost always not what the business is doing now. What the business is doing now is a result of the interplay of many people who have different ideas of what the business should be doing now. Some of these ideas are well thought out and the result of debate and/or experience. Others are a result of people wanting to get to lunch or leave at a certain time. This mess is what defines the non-automated process. This is why collecting requirements for a software project is so difficult. Since there is no one who understands the entire process, many people are interviewed to collect the requirements. The requirements are collected based on how clearly each person can explain their views on how things should be done. They are also affected by how well each person can sell their ideas. Even if the system is implemented close to this description of the intent of the software, if the right people were not involved, the final system will be less than ideal. Of course there is also the problem that while the software is being produced and tested, the needs of the company are in flux as a result of many internal and external factors.

The next advantage is closer to the mark. "Arcane computer code" is what I do 5 days (sometimes more) a week. I find it neither arcane, nor do I think of it as "computer code". It is difficult for some people to understand this, but in many ways I am more comfortable with, say, C# than I am with English. The advantage of arcane computer languages is that they tend to be very precise and limited. I know what the computer is going to do in most cases. It is true that sometimes I overlook or confuse things, but I do this much less often in C# than I do in English. The higher the abstraction, the more effort goes into understanding what is actually going to happen. Very high-level languages have the issue that one must be familiar with the subtleties of the language. There is a greater risk that my interpretation will not match the interpretation of the computer. Furthermore, high-level abstractions tend to be very domain-specific. Take an inventory abstraction, for instance. Is it tracking the inventory in a vending machine, a retail store, a warehouse, a factory? Even within these distinctions there are distinctions. Is the vending machine selling discrete items (e.g., candy bars, sodas) or is it vending coffee by the cup? Is the warehouse holding drugs, steel, or food? Each of these variations must be represented somehow. Either there needs to be a huge third-party ecosystem that creates each of these specialized abstractions, or the abstraction must be generic enough that it is adequate to cover all the possibilities. Adequate solutions are not good enough to differentiate your business, by the way. This is why most companies use software, after all. They want a competitive advantage. So they also need to be able to create their own high-level abstractions.
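To put a little code behind that point (these interfaces are invented for illustration; they are not from any real product), even a "simple" inventory abstraction splits apart the moment you pick a domain:

    using System;

    // A vending machine tracks discrete items in numbered slots, a coffee vendor
    // tracks consumables by volume, and a warehouse tracks SKUs, locations, lots,
    // and expiration dates. A single generic "inventory" abstraction either grows
    // until it covers all of this or stays too thin to be useful to any of them.
    interface IVendingInventory
    {
        int ItemsRemaining(int slotNumber);
        void DispenseItem(int slotNumber);
    }

    interface ICoffeeVendingInventory
    {
        double OuncesOfCoffeeRemaining();
        double OuncesOfWaterRemaining();
        void DispenseCup(double ounces);
    }

    interface IWarehouseInventory
    {
        int QuantityOnHand(string sku, string binLocation);
        void ReceiveLot(string sku, string lotNumber, int quantity, DateTime expires);
        void PickForOrder(string sku, int quantity, string orderNumber);
    }
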

This brings us to the final advantage. Since we have removed the "human" element from the process, there will be fewer errors, right? Not really. First, we have not removed humans; we are using a different set of humans. If we have a high-level abstraction that is widely applicable, we will reduce errors, because a large number of people are sharing the effort to find errors in the code. This is why GUI toolkits actually do make programmers more productive. As you move the abstraction layer up, you are naturally working with smaller audiences. The inventory abstractions I spoke of in the last paragraph show how high-level abstractions must target a specific domain to be of real value. Even then, the parts of the software that really make a difference tend to be unique to a single product. That is why they make a difference. So now you are back to using software produced in-house for one application. All of the advantages of having many people reviewing and correcting the software go away.

So where does that leave us? Software is hard because:

  1. No one really knows what it should do
  2. Software that gives me a business advantage is different from software that other people are using
  3. There are technical challenges in creating software

Intentional Software's solution will help with item three, but it does little or nothing to help with the first two. I don't mean to pick on them. There have been any number of technologies before them that were supposed to make software development easier and failed. If one looks at Delphi, Visual Basic, or an ancient DOS programming tool called Layout, they all had the same promise, and none of them changed the world. They all try to make the act of developing software easier, but if you look at item number one, it has nothing to do with the act of writing software, except that it is a prerequisite. Item two means that any sort of software factory won't really solve the problem either. Solving the third problem is helpful, but not enough to change the experience significantly. Software is hard for real-world reasons.

To paraphrase Albert Einstein: "Software development should be as simple as possible, but not simpler."