The challenge of standards-making and adoption

I was reading this month’s Computer magazine and found an interesting article by Charles Rich entitled Building Task-Based User Interfaces with ANSI/CEA-2018. It sounded like an interesting UI abstraction, so I went looking for the standard to read through it. And that’s when I was reminded of something: ANSI/CEA-2018 is one of those paid standards. If I want to read anything beyond the table of contents, I have to pay $85.00 (or $63.75 if I were a member). Granted, if I were doing this for my job, $85 is not much, but since I was only investigating something that might be interesting to apply at work, I don’t think it’s worth paying anything.

The odd thing about this is that it feels like we have already learned that this is no longer a good model. Right now the model that seems to work best is the open-source one, in which you provide everything for free and then let people pay if they want special attention from the design/implementation team.

Also, having the article appear in a widely available, research-oriented magazine such as Computer makes this dissonance even greater. Research publications are meant to make your research public, allowing other people to try it out, compare it to their own approaches, or improve their systems based on your experience. When an article is just an advertisement for something, it ends up hurting my expectations of all the articles in the magazine. Why should I read through them if I won’t be able to really apply any of it without paying money?

One day we might be able to convince the whole industry that the pay-to-think model is not the right one (i.e., the model where you have to pay before you can even look around and consider whether the product is right for you). The software industry was the first to realize this and change (first with free time-limited demos, and then with company-backed open source software). Standards should be next. What is the use of a standard if it’s not visible for people to understand and apply, anyway?


Why use inheritance?

DISCLAIMER: if you are actually hoping that this post will help you answer this question, please stop. Go to some authoritative source to find the official answer to this question. I don’t quite agree with it, but who am I?

The other day I was preparing to interview an interesting candidate. This candidate had had other interview events before, so I was reading through the notes when I was shocked by one of them. But before I quote this note, I have to observe that the two protagonists of this exchange were “experienced” Java programmers (2+ years of experience):

Interviewer: Why use inheritance?
Candidate: mumble… mumble… Because it reduces repetition between classes… mumble… mumble
Interviewer: Good! Let’s move on…

And he wasn’t kidding about the “good”. This is not an exact transcript of the interview, or really even an exact transcript of the approximate transcript of the interview (the interviewer’s notes). But it really worried me that to this day there are still people with more than a few years of programming experience who think that class inheritance is just a mechanism to reduce code repetition. Especially when we are talking about a language that does not support multiple inheritance (I’m not claiming that this view is right when you have multiple inheritance, but it’s even more wrong when you don’t).

It’s one of my pet peeves when people create very deep class hierarchies just so they can hang their “utility” methods throughout their classes, without having explicit references to a utility class and without having to use static imports (which I think are sometimes extremely confusing). It annoys me because I have seen people shoot themselves in the foot so many times: suddenly they need a real inheritance relationship with a class they don’t control, and they are stuck, because Java doesn’t allow multiple inheritance.

I tend to be very strict about when I use inheritance. If I’m in doubt whether I can say that A is-a B, I end up using containment instead of inheritance, even though inheritance would save a few characters here and there and it’s very unlikely that A will diverge enough to require another parent. Yes, it’s more verbose, but it will keep your code sane in the future. And navigating through contained classes is generally easier than navigating inherited methods: if you don’t have Ctrl-click in the IDE to help you figure out where the implementation of the method you are calling lives, it takes much more time to find it. Not only is it time-consuming, but you could also get it wrong by missing an override somewhere in the class hierarchy.

So, why use inheritance? Polymorphism is one of the answers, although not all OO languages support it. The other answer is to improve the understandability of your classes. If two classes have part of their state stored in a common parent, it should mean that that part of the state is semantically compatible (and I could write a long critique about the fact that sometimes this is not true, and it drives me crazy). Containment does not generally provide this same compatibility, because things can be contained in different contexts. Thus it becomes harder for people analyzing the code to tell how to interact with a set of classes.
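To make the containment-over-inheritance point concrete, here is a minimal Java sketch (the class names are hypothetical, purely for illustration): instead of inheriting from a utility base class to pick up a formatting method, the class holds a reference and delegates.

```java
// Hypothetical names, just to illustrate containment vs. inheritance.
// The "utility" functionality lives in its own class...
class MoneyFormatter {
    String format(long cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
}

// ...and OrderPrinter *has-a* MoneyFormatter instead of extending some
// utility base class, keeping its single superclass slot available for a
// genuine is-a relationship later.
class OrderPrinter {
    private final MoneyFormatter formatter = new MoneyFormatter();

    String describe(long totalCents) {
        return "Order total: " + formatter.format(totalCents);
    }
}
```

It costs one extra field and one delegation call, and OrderPrinter can still extend whatever it genuinely is-a later on.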

Challenged by real-world ontologies – recipes

One of my apparently never-ending projects that I’ve spent a lot of time thinking about lately is how to build a system to represent recipes (as in cooking recipes). At first it seemed like a problem that wasn’t that hard to solve: no deep hierarchies, no complex multiple parents and constraints. However, this was an illusion. As I started modeling, using OWL and Protege, I quickly realized that I was hitting the old challenge of all the ontology-based systems I’ve worked on in the past: what is a type and what is an instance.

When you look at a problem in the abstract, it’s not too hard to draw the line deciding what will be a class and what will be instances of that class. Take, for example, the classic Noy and McGuinness Wine Ontology. There, varietals and characteristics are all classes, and the actual wine bottle is an instance. Easy enough? Well, until you try to get to more interesting things, like properties.

Let’s say that you want to represent a winery and provide information about the wines it produces. So far it seems easy: a winery is an instance, and it has an object property connecting it to the instances of the wines it produces. But now let’s say that you want to provide information about the type of wine it produces. “Type” means you are now really talking about a relationship between the winery and a class of wines. Which means that even if a winery makes only one wine, that wine now has to be a class of wines. So far, almost good. You can say that the range of the property is a class, but I’m not aware of a way of saying that the range is a class that is a subclass of Wine. The only solution is to go back to calling it an instance and use the single-class-instance pattern. But then how can you relate that instance back to the class of wines it stands for in the ontology?
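To make the workaround concrete, here is a rough Turtle sketch of the single-class-instance pattern (the names and the ex: namespace are hypothetical, my own illustration, not from the Wine Ontology):

```turtle
@prefix ex:   <http://example.org/wine#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# The kind of wine is modeled as a class...
ex:DryRiesling a owl:Class ;
    rdfs:subClassOf ex:Wine .

# ...but to use it as the target of an object property, a single canonical
# instance stands in for the whole class.
ex:theDryRiesling a ex:DryRiesling .

ex:AcmeWinery a ex:Winery ;
    ex:producesWineType ex:theDryRiesling .
```

Nothing in the standard semantics, however, tells a reasoner that ex:theDryRiesling is “the” representative of ex:DryRiesling, which is exactly the relationship you would want to query.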

That’s just the beginning. Now let’s say that you want to represent wine pairings: some types of wine go well with some types of food. That relationship actually holds between an instance representing a type of wine and an instance representing a type of food. Once again you have two elements for one single concept. And this keeps happening, and the cost for the person trying to maintain it all is quite large.

Now for my specific case, representing recipes. The problem with recipes is that most of them are actually defined as classes of recipes. Relating back to wines, a recipe can call for a dry white wine. There are many dry white wines out there, and each choice will generate slightly different results in the recipe. The only “complete” way I found to represent something like this is to define the recipe as a class and then make each ingredient a type of object property, which explodes the number of types of elements that exist and makes the ontology pretty much impossible to maintain manually. And if it’s not manually maintainable, I’m not sure it’s that useful.

So, how to solve this issue? That’s a very good question. I’m not sure I have the answer right now, so I’ll defer it to a future post. I’ve started looking at rule-based constraints instead of class-based ones, and they do help some, but not enough. There is still a need to increase the fluidity between classes and instances, but it’s complicated to define “recursive” metadata that is applicable to itself and still keep inference bounded. What is the use of creating a language if you can’t actually benefit from its semantics?

Anyway, I probably should go back to reading research on the subject. I think I’m missing something important somewhere.

Just because I mentioned OpenCalais… Now a Twitter mashup

I can’t say I’m too impressed by this, though that might be because I don’t know much about Chicago politics. But mostly because I just mentioned OpenCalais, I felt that I had to post this too:

The buzz on Twitter on the Illinois Special Elections in the Fifth Congressional District

It’s probably a simple search on Twitter that selects entries that could be related to the candidates; they then run them through OpenCalais to see which ones are actually identified as mentioning the politician, and tally the results. Simple, but a step above the “Google solution” of just plotting keyword search volume over a timeline. I hope more people realize that search is great, but it’s very weak without “manual” human filtering, so any automated trend built on keywords alone is bound to give you misleading results.
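The tallying step itself is trivial. Here is a hedged Java sketch (the candidate names and the shape of the extractor’s output are my assumptions, not the mashup’s actual code) of counting how many tweets the entity extractor resolved to each candidate:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the tallying step. Assume the extraction pass has
// already been run over each tweet, producing the entity names it recognized.
class MentionTally {
    static Map<String, Integer> tally(List<List<String>> entitiesPerTweet,
                                      List<String> candidates) {
        Map<String, Integer> counts = new HashMap<>();
        for (String c : candidates) {
            counts.put(c, 0);
        }
        for (List<String> entities : entitiesPerTweet) {
            for (String entity : entities) {
                // only count entities the extractor resolved to a candidate;
                // unrelated entities (e.g. "Chicago") are ignored
                if (counts.containsKey(entity)) {
                    counts.merge(entity, 1, Integer::sum);
                }
            }
        }
        return counts;
    }
}
```

The point is that the filtering is done by the extractor’s identification step, not by raw keyword matching.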

Puzzling OpenCalais

I’ve been trying to follow the developments of OpenCalais for a while. It’s a very interesting project: it lets people annotate elements in free-text files, annotations that can then be used to relate things together without much manual work. The interesting thing is that every time I try it out, I’m surprised by some positive and some negative things. Here is an example:

I’ve posted a MarketWatch article: These 13 ‘tipping points’ have us on the edge of a Depression

In general it does a pretty good job, especially at identifying people and some places. It even tries to tie facts together and do some anaphora resolution (figuring out who a “he” in a sentence is referring to). It’s not too smart about it, though. For example, in the paragraphs:

So can you trust them to have a magical formula for predicting the next “tipping point,” the next “Black Swan” in our future? No. The correct answer is (c), Warren Buffett’s answer. Here’s why.
Background: First, Wall Street’s narrow equations always leave out key macroeconomic data. Always. They cannot handle “big picture” issues. Their formulas are what mathematicians call “indeterminate equations,” with an infinite set of solutions. Guesses. So Wall Street invariably ignores big-picture issues that lead to meltdowns. Meanwhile, they get rich playing with your money.
Second: The Buddha would call Wall Street’s mathematical problem, a Zen koan, an impossible question. And he’d warn you to: “Believe nothing, no matter where you read it or who has said it, not even if I have said it, unless it agrees with your own reason and your own common sense.”

It decided that the “he” (added in bold for people to find it more easily) referred to Warren Buffett, three paragraphs earlier, and not to the Buddha, who was mentioned in the same paragraph but not identified as a person.
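You can see how a naive resolver would make exactly this mistake. Here is a Java sketch of a “most recent tagged person” heuristic; this is entirely my guess at the behavior, not OpenCalais’s actual algorithm:

```java
import java.util.List;

// A naive "most recent tagged person" heuristic: if "Buddha" is never
// tagged as a person, the nearest tagged person before the pronoun wins,
// no matter how many paragraphs back it appeared.
class NaivePronounResolver {
    // personMentions: tagged person names in document order
    // positions: token position of each mention
    // pronounPosition: token position of the pronoun to resolve
    static String resolve(List<String> personMentions,
                          List<Integer> positions,
                          int pronounPosition) {
        String antecedent = null;
        for (int i = 0; i < personMentions.size(); i++) {
            if (positions.get(i) < pronounPosition) {
                antecedent = personMentions.get(i); // keep the latest one seen
            }
        }
        return antecedent; // null if no person precedes the pronoun
    }
}
```

With only “Warren Buffett” tagged, the pronoun in the Buddha paragraph resolves to Buffett; tag “Buddha” as a person as well and the same heuristic gets it right.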

Other oddities:

  • It thought that Chavez was a city and not a person
  • It classified the metaphor “Grand Obstructionist Party” as an organization. That is an advanced interpretation!
  • It tagged “Depression” as a medical condition
  • It missed the first name of author Nassim Nicholas Taleb, thus missing the “author” connection, which it doesn’t miss for Malcolm Gladwell

It does a pretty good job at what it was initially built to do: identifying phrases like “Henry Kaufman, former vice chairman and chief economist at Salomon”, and resolving the Fed to the Federal Reserve and Dow to Dow Jones.

I also liked the addition of the identified “Industry Terms”: bank bailout (shows that they are up to date on modern tendencies), printing money, and shadow banking system. But why “telecommunications”? Is that term really that technical?

Anyway, it’s easy, as a human, to see what’s wrong, and, as an NLP enthusiast, to see how things could be improved (especially from my metadata background, knowing that it would be very easy to catalog all authors of popular books). But I can’t ignore the fact that they’ve done what nobody has really tried before: put entity and even relationship extraction into production for anybody to use and criticize. Right on the theme of my latest resolution: whatever you do is worthless unless you put it in production for everybody to see.

Software engineering vs. computer science vs. ?

It’s interesting that from time to time I have a discussion with a co-worker about the difference between a software engineer and a computer scientist. And I always come out of this conversation convinced that I’m actually neither. But before I talk about myself, let’s start with a short definition of both (note: this is not a dictionary definition, it’s mine, so feel free to disagree):

Software engineers:

  • Have system in mind when building code
  • Tend to optimize to reduce deployment risk
  • Decompose the problem into customer-requested features
  • Enjoy the challenge of design for integration

Computer scientists:

  • Have algorithms in mind when building code
  • Tend to optimize to reduce number of cycles to solve the algorithmic problem
  • Decompose the problem into algorithms that need combining and solving
  • Enjoy the challenge of complex problems that get them closer to proving that P != NP

Most software developers I know cannot be 100% characterized by either of those definitions; it depends on the project and their background. For example, if you don’t know much about designing a highly scalable multi-host system to generate unique numbers, you end up focusing on making your internal code clean, with the least amount of repetition and with the algorithms well separated (or you might think that i++ is too slow, so you create specialized code that does the same thing much more efficiently).

So, when trying to categorize myself, I realize that, while I do display behavior that can be characterized as one of the two classes that I’ve described, I think there is probably a class missing:

Engineering scientists:

  • Have system vision/theory in mind when writing code
  • Tend to optimize to make the code look more like their system, even when it means using an algorithm that is much more complex than needed for the current problem
  • Decompose the problem into pieces that look more or less like the final theoretical system
  • Enjoy the challenge of, after all the added complications, still being able to finish a product

That’s about how I would characterize myself. I make no claims that I’m a good coder. I do care about the code I write, but I care so much more about this vision of what the world should be if a system were built to solve all past, present, and future needs of all my customers that sometimes being stuck writing code is painful.

The Zune bug

I’m not sure if it’s really true, but it comes from a reputable source:

Cause for ZUNE leapyear problem

It’s kind of a funny bug, actually:

year = ORIGINYEAR; /* = 1980 */
while (days > 365) {
    if (IsLeapYear(year)) {
        if (days > 366) {
            days -= 366;
            year += 1;
        }
        /* if days == 366 (December 31st of a leap year), neither
           branch runs, days never changes, and the loop never ends */
    } else {
        days -= 365;
        year += 1;
    }
}
I heard that it had something to do with leap seconds, but it was actually a simple problem with leap years, and very easy to fix too. And now I understand why Microsoft’s workaround was to just let the device run out of batteries, wait until the day after the last day of the leap year, and turn it on again (source). All you had to do was get past the last day of the leap year, since that’s the only day that triggers the infinite loop.
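To see the fix in action, here is the loop transliterated to Java so it can actually be exercised (the helper and constant names are mine): on day 366 of a leap year neither original branch fires, but that day is simply December 31st of the current year, so the loop should just stop there.

```java
// Transliteration of the fixed date loop; helper and constant names are mine.
class ZuneDate {
    static final int ORIGIN_YEAR = 1980;

    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    // days is a 1-based count of days since January 1, 1980; returns the year.
    static int yearFromDays(int days) {
        int year = ORIGIN_YEAR;
        while (days > 365) {
            if (isLeapYear(year)) {
                if (days > 366) {
                    days -= 366;
                    year += 1;
                } else {
                    break; // day 366 of a leap year: December 31st, done
                }
            } else {
                days -= 365;
                year += 1;
            }
        }
        return year;
    }
}
```

With the original code, yearFromDays would spin forever on December 31, 2008; with the added break it terminates with the right year.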

I’m so glad I’m not working on any software that gets deployed to devices in the wild, where you can’t just press a button (or maybe a couple of buttons) to deploy a fix. If something like this happened to any of my software, somebody would be paged, and we could find the bug, patch it, and deploy it in less than an hour (depending on whether there are higher-priority builds and deployments going on at the same time). Poor Zune-ers.
