danbricklin.com/log

Starting January 28, 2005
Being on the Gillmor Gang and more fiber stuff, A mischaracterization in the press, More on who does tagging, Microsoft releases their updated patent license, Systems without guilt where every contribution is appreciated, Adding fiber is lots of work and Bob's new essay
28Jan05-04Feb05
Friday, February 4, 2005 
Being on the Gillmor Gang and more fiber stuff [link]
Wednesday I got an email from Mike Vizard wondering if I had time the next day to be on the Gillmor Gang to talk about innovation. I accepted and then did a little playing with my calendar to make it work. So, yesterday at 5 pm, along with the others, I called into a toll-free number and answered questions and participated. Being in a podcast is quite different from listening to one! You want no noise, so no being out walking the sidewalk. I turned off the ringers on the phones in the room. (Doc Searls complained about cell phones with automatic gain controls that can't be turned off. He didn't explain, but I guess that makes background sounds, like cars driving by, loud while you're staying quiet to listen to others speak.) You have to pay careful attention (Steve Gillmor: There's that word you like, "attention"...). You can't press pause, you can't back up to listen again if your mind wandered thinking about what you are about to say, and there are people asking you something you may not know anything about, expecting you to be as knowledgeable as they are (for example, I haven't used Flickr, so I had to work around a direct question about it, nor have I used the new iPod Shuffle to comment on its UI from experience). You can't see people, and with others connected only by a pay phone by the side of the road, there's little way to raise your hand when you want to respond or get a clarification (some other podcasts are being done with Skype, which includes a shared text chat, but I haven't been on one of those yet). On the other hand, it's nice to be part of something I love listening to, and now I feel even more "connected" to it (though that makes me even less like most of my readers now).

I had been thinking about a more global view of the effects of pervasive IP connectivity for a panel I'm going to be on for Infosys Technologies and we got to delve into that view. I got to tie it into teaching of evolution, open source, and lots of other things. Steve was even kind enough to remind people indirectly that I do consulting (I don't know if that was his intent, but thank you...).


Here are some pictures I took with the camera self-timer and a tripod to give you a behind the scenes look:

Dan holding the phone, seen from the side below; Dan leaning back in his chair
Waiting for Doug Kaye to start the podcast recording; during the show, listening carefully with glasses off
As was clear from the conversation, lots of people are well aware of the fiber to the home being installed in my town and my discussion of it. The crews are still out there every day for now. I asked one of the linemen what was happening. It seems that the first cable (the one they put in a few months ago) was the feeder from the main office. They are now doing the branches off to the poles outside all the homes. They are leaving up extra cable in coils awaiting the splicers who will come later. Then there are the other crews that put in a line to our homes and install the gear inside. I hear that it will be summer when it all works. We'll see.

Coil of cable and man in cherry picker
Verizon lineman pulling slack in fiber cable up on the pole in front of my house

Wednesday, February 2, 2005 
A mischaracterization in the press [link]
I just ran across an article that quoted me about Eric Kriss' announcement. It was in Mass High Tech and titled "State closes open source door, focuses on architecture instead". It starts out with this: "The Commonwealth of Massachusetts is backing off a controversial proposal to purchase only open source software and is instead moving to include proprietary software as long as there is an underlying open architecture to allow access to current information as technology changes." After some quotes from Secretary Kriss, it quotes me: "Local software executives were quick to agree. 'Everybody thinks this is reasonable,' said Dan Bricklin, chief executive officer of Software Garden Inc. in Newton and a longtime player in Boston's software industry. 'Of course you want to do this.'" Then it says: "Kriss' announcement capped a turbulent 18 months for the state's IT department. When he first declared in September 2003 that the state would strive to use only open-source software as a cost-saving measure..."

I read this with a sense of incongruity. What? The state was going to use only open source and then backed off? That's not what I thought it was. Did I think "this" is reasonable, with "this" seeming to mean a "backing off" from requiring open source? Was I endorsing their characterization of what the state was doing?

I don't know if it was the reporter or the editor, but there seems to be quite a misunderstanding about what the state is doing. The IT Acquisition Policy (from January 2004) says: "For all prospective IT investments, agencies must consider as part of the best value evaluation all possible solutions, including open standards compliant open source and proprietary software as well as open standards compliant public sector code sharing at the local, state and federal levels." So, that says both open source and proprietary software are to be looked at; there is no statement about "only open source". What about the January 2004 Open Standards Policy? That says: "All prospective IT investments will comply with open standards referenced in the current version of the Enterprise Technology Reference Model." Ah, that's it! New software has to comply with open standards, not "must be open source".

This has been a major source of confusion: Open Standards and Open Source are not the same thing. In fact, much of the popular proprietary software, like that from Microsoft and Adobe, supports open standards (as defined by the state IT department) such as HTML, RTF, and XML. (Microsoft has been especially aggressive with support of XML and its relatives.) And even with Open Standards there wasn't a "backing off": The Commonwealth is adding to what they accept to include formats that meet their main objectives but don't have the restrictions of their original requirement of there being a standards body involved (so, for example, HTML could be supported before the W3C was formed).

I guess the distinction between Standards and Source Code is too much for some publications. They also sometimes get Revenue and Profit (or "making money") mixed up. This is sort of like mixing up John Kerry the senator and John Kelley the runner (which would be a big mistake in Boston Marathon country), or the Chevrolet Corvair (of "unsafe at any speed" fame) and the Chevrolet Corvette (which accelerates to almost any speed) -- hey, they're both something about GM cars.

This brings me back to blogs (surprise!). With blogs you often get to find out more about the writer and their background than with most publications. It is more likely that you'll be able to find a blog written by someone really involved in a field or event and intimately knowledgeable about the distinctions between different terms. A statement in a blog about telecommunications from David Isenberg is more likely to be technically correct than an article from a reporter who usually covers "business". Bloggers also like to find authoritative sources, so even non-expert blogs may help because they've ferreted out the expert. This comes back to the desire for better access to source material that Dan Gillmor and many others have been talking about.

This isn't just a trait of blogs, of course. Look at popular reporters such as Cokie Roberts, who was brought up in a family in the middle of Washington politics, or the NYTimes' Bernard Weinraub, who was in the middle of Hollywood in many ways (as he relates in his final column). Blogs, though, let people who are full-time in a field provide coverage accessible to the rest of the world. Being able to understand a field well, then leave it, and still qualify for a full-time "professional" journalist job is uncommon.

Another thing about blogs vs. many news reports: Bloggers often get joy out of just letting you know a fact or observation. Things like: "I found a new device I really like", "Here's an interesting picture", "I figured out what the Senator meant". A lot of today's "professional" journalism is about showing conflict, people changing, horse races, etc. They are looking for some "you win/you lose" situation. In the case of the article I'm citing, I guess it's: "State backs off" against "business". There are words like "rather than", "turbulent", "lobbied furiously", "more conciliatory", and finally "detailed explanation is now available" (implying it wasn't there before -- untrue, these are one-year-old documents -- and perhaps implying that these are for people who want to dig further than they did...). If you want to find conflict, winners, and losers, you will. If all you want to use is a hammer, everything looks like a nail. You'll look high and low for nails and ignore the nuts and bolts. Reporters are honing their skills at finding conflict at the expense of the skills of deep understanding. Many of us bloggers are obsessed with understanding and feeling and communicating our experiences and what we find out. Let our readers figure out how to use it.
 
More on who does tagging [link]
Hmm, that last statement above, "Let our readers figure out how to use it", brings up a point to add to my post about guilt and tagging. Many people are pointing to Clay Shirky's observation that there is good, cheap tagging going on, but that it's done by readers and not authors. Bingo! (Over the years I've found that Clay is so good at pointing some things out that it sometimes makes me smile in awe.) That's my point: Tagging by authors is what's hard, error-prone, guilt-ridden, etc. Google uses tagging (through linking and the words around it) to help do search. That's "reader tags" (readers of the thing linked to). Authors "tag" well enough in many cases by putting in words that describe how they understand something, and search engines like Google have gotten pretty good at finding those words and other clues. The "other" words to search for are often not ones the author would think of or else they'd have said them. Sometimes the "tag" comes about after the item is written and has to do with how it is received (I remember a reporter who was asked about "that" article by a CEO and he knew exactly what was meant) or what it is in relation to something else. In my Cornucopia essay's "Additional Thoughts" (at the end of the essay) I point to the observation that Napster had the added benefit of letting listeners name songs by how they remembered them instead of the "official" name, thereby adding that additional value to the search database.

Unfortunately, there is work in the tagging area that involves tags in the original source of an item. That is the problem we are pointing out. I guess we're seeing that the original source is not a good place to require all tags. We learned that from Google and others. Clay shows that sometimes the tagging need only be done by those who like to tag. Not many people need to set the price of ketchup; even one would do.

Some of the "metadata" Clay talks about (people choosing a particular product over another being one vote for "this is better than that") is for sifting through and weighing (input into setting the price of ketchup). Google tells me which are the most "popular" items for a given tag. That works in the aggregate, but in search engine technology people worry about accuracy and reach. Some of the things we may want are only in one place (reach), so depending upon the accuracy improvement provided through aggregation may not be good enough when there's little to aggregate. That's where we want every author to tag every thing and to do it "correctly", but it's not to be, I guess.
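To make the "works in the aggregate" point concrete, here's a tiny sketch (my own toy illustration, not how Google or any real system actually does it; the item names and tags are invented) of counting reader-supplied tags and ranking items by how many readers applied a given tag:

```python
from collections import Counter, defaultdict

# Hypothetical reader tags: each reader attaches whatever words they like
# to an item. Nobody is required to tag, and nobody tags everything.
reader_tags = [
    ("cornucopia-essay", "sharing"),
    ("cornucopia-essay", "napster"),
    ("cornucopia-essay", "sharing"),
    ("fiber-post", "verizon"),
    ("fiber-post", "broadband"),
    ("fiber-post", "sharing"),
]

# Aggregate: count how often each tag was applied to each item.
tag_counts = defaultdict(Counter)
for item, tag in reader_tags:
    tag_counts[item][tag] += 1

# "Search" for a tag: rank items by how many readers used that tag on them.
def items_for_tag(tag):
    ranked = sorted(
        ((counts[tag], item) for item, counts in tag_counts.items() if counts[tag] > 0),
        reverse=True,
    )
    return [item for count, item in ranked]

print(items_for_tag("sharing"))   # ['cornucopia-essay', 'fiber-post']
```

The ranking gets better the more readers chip in, but for an item that only one person has tagged (or nobody has), there is nothing to aggregate; that's the reach problem.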

Monday, January 31, 2005 
Microsoft releases their updated patent license [link]
As a follow-on to my posts a few weeks ago about the announcement from the Commonwealth of Massachusetts, here's some more of the story: Microsoft has posted the new version of their "Office 2003 XML Reference Schema Patent License" dated January 27, 2005, along with some comments. This is supposed to now be acceptable as an "Open Format" for government purposes. Let's see (in the view of this lay person) what changed.

There are three major changes:

First, the section defining terms (like "Necessary Claims") has been reworded somewhat, but there's no effective change except for things a patent lawyer might understand (and I understand those are for the good).

Second, the license is made perpetual, subject only to a "terminates if you sue us with your patent about these formats" clause (something common in even Open Source licenses). The old license did not say that it was forever, and could have been changed at any point. This is good.

The last change is the addition of these sentences: "By way of clarification of the foregoing, given the unique role of government institutions, end users will not violate this license by merely reading government documents that constitute files that comply with the Microsoft specifications for the Office Schemas, or by using (solely for the purpose of reading such files) any software that enables them to do so. The term 'government documents' includes public records." So, any program can be used to read public records (records made by government people in the course of their work and records filed by others for public inspection, such as SEC filings and court filings). Those programs don't have to meet the other criteria (such as displaying notices or fully understanding the other terms, some of which are ambiguous) for Microsoft to be barred from suing the user. What it doesn't do is allow a developer to develop such software if patent law could stop them. For example, the GPL doesn't allow some of the restrictions of the Microsoft license (like the requirement to display a notice) and is therefore incompatible with it; the Microsoft license, in turn, says it is incompatible with any license that prohibits its terms, so the incompatibility runs both ways. If you have such a program and use it, though, that use is OK (though I have no idea about the procurement of such a program). From a legal viewpoint, this is important. Practically, given the realities of what developers will do and other issues, this may be a net positive thing and have some effect even if it doesn't go all the way you would want. Again, something a lawyer would understand.

So what does this mean? The perpetual term is important because it means that they can't turn around and remove rights already given. This acknowledges that licensees need long term comfort and stops some forms of messing with this later (which has happened in the past with other companies and their licenses).

It's the government carve-out that is of most interest to me, though. As I pointed out in my original post, Massachusetts was just pushing to meet their governmental responsibilities; they were not attempting to carry the whole world of software "freedom" on their shoulders. Did they meet their goal? What does this "say" on behalf of Microsoft?

In Eric Kriss' "official" transcript of his announcement, he defined "Open Formats" as being "...fully documented and available for public use under perpetual, royalty-free, and nondiscriminatory terms." (This is not the general term "Open" but rather a "Massachusetts ITD Definition" term. Don't confuse it with "Open" as viewed by anyone other than the government.) By that measure, they have succeeded in getting the MS Office XML Schemas to be closer to an Open Format (it was royalty-free but now it's also perpetual), and, for "public use", an actual Open Format. But an Open Format for what? It is not an Open Format for creating word processing and spreadsheet documents. It is an Open Format for reading only in the least restrictive part of the license, so it serves only as a publication or transfer format. It is not a working format for word processing, etc., if you want to allow "any" other program to be used. And there is a cloud over creating those "any" programs, and one class of "any" programs itself doesn't work with such license terms even if the carve-out removes that restriction in the other direction. The goal of guarding against data lock-in appears to be met to some extent, but not in what I would think is a lay person's view of "nondiscriminatory", given the elephant in the room that is the GPL. Will this be enough for all governments to feel comfortable? We'll see.

The decision by Microsoft to allow only reading is telling. They make it clear, by the need for a carve-out for governmental use only (which is "unique"), that this license is restrictive elsewhere. They also make it clear, by not allowing writing, that if you want a format for creating and working with documents and you want the option of a program under licenses that don't allow the conditions in the license (such as displaying a notice, and I don't know what else), you should find another format. Governments can switch to another format at any time with programs under any license that can read the MS formats and write the new one, but it may not be legally possible to contract for those programs to be written under certain licenses. However, once the files are converted to a suitable format, any program can be used by anybody. Some of the allowed licenses are "officially" Open Source (but not all Open Source licenses may be used). The perpetual license means this conversion does not have to be done today.

I hoped for more. This is, though, a nice step in the right direction. The addition of the explicit perpetual license knocks down another obstacle to using it, removing a form of control, and means that there can always be programs (though maybe not under all licenses, but under quite a few popular ones) that can read this format and convert it in the future. If you chose a Microsoft product to create the file in the first place for your company's internal use, you probably do not have a problem with a restriction on the range of licenses under which you can get a program to convert the data into another format (a new format from which you might never go back and which is compatible with all licenses). The Microsoft license is royalty-free and places few restrictions on programs, so that shouldn't add a cost component. If a document in MS Office format becomes a "public document", then the public is guaranteed the right to use any program to read it (if they have it).

I feel that the governmental carve-out is a very public acknowledgment by Microsoft of the need for complete freedom in moving data into new formats, albeit in just this one sphere. That sphere, public and governmental documents, is an important one since it is the source of documents for all, not just for those who decide to accept a product from a particular source. By singling it out, though, they are acknowledging that the general license isn't as open as they might want people to believe and that they do believe in lock-in. That's not a good signal to send the world. The fact that Microsoft is trumpeting how good this is shows they know people care.

I understand that even something this simple and seemingly minor was a lot of work for the people involved on both sides. Thank you for this step and thank you Massachusetts government people for trying (the Microsoft article says this is in response to them).

A historical note: For some reason, this reminds me of the old Lotus days when they turned to the courts to protect their character-based 1-2-3 spreadsheet franchise through compatibility claims. That was not about code copying but rather data compatibility (e.g., the macros). They won some and then eventually lost in the Supreme Court against a competitor that didn't matter very much. Lotus also lost in the marketplace, which moved on to a functionally better product (Windows-based Excel) when they didn't provide one themselves (they underinvested in Windows and pure spreadsheets in the GUI environment). Microsoft has not turned to the courts, but they are invoking legal protection on a file format to keep a perceived competitor (GPL?) at bay on data compatibility. It's strange that, of all things, this is about word processing stored in XML. XML comes from SGML (the Standard Generalized Markup Language), which was designed explicitly for storing things like word processing. Looking at some reports, the level of invention here (in the protected schemas) appears not too earth-shattering. It's not like inventing xerography or a new way to do writing with a computer. Why do they need to act like they are afraid of having a level playing field with your data?

An interesting side note to the telling of this whole story: A press article related to this issue says "In e-mails in Internet discussion sites, Kriss has said..." (in reference to a posting by Secretary Kriss on Slashdot, I assume). It also says "Pacheco said he still hasn't been able to discuss the issue with Kriss, but..." This is great! A very high-ranking public official communicates through a blog on at least equal footing with his communications with a legislator (of the other party...), and the press acknowledges it. This is another sign of the acceptance of the blog form of communication. Note to public figures: When you communicate through a blog (preferably your own) you may have a better chance of being quoted correctly, since what you say is very clear and there is a check. Press releases are not a substitute: They don't have enough quotes, and they're usually in the third person except for one or two stilted sound bites, which is not good for reporters used to talking directly to someone.

Sunday, January 30, 2005 
Systems without guilt where every contribution is appreciated [link]
In reaction to, and support of, AKMA's post about tagging, Dave Winer writes that he stopped tagging the categories of blog posts. As soon as he missed one he felt guilty and then as the guilt grew he tagged less. He started just assigning things to a couple of categories and then not tagging at all.

I think Dave has pointed out a key problem with tagging. It seems like a nice idea but it requires us to always do it. The system wants 100% participation. If you don't do it even once, or don't do it well enough (by not choosing the "right" categories), then you are at fault for messing it up for others -- the searches won't be complete or will return wrong results. Guilt. But because it's manual and requires judgment you can't help but mess up sometimes so guilt is guaranteed. Doing it makes you feel bad because you can't ever really do it right. So, you might as well not play at all and just not tag.

This is the opposite of what I was getting at in my old Cornucopia of the Commons essay about volunteer labor. In that case, in a good system, just doing what you normally would do to help yourself helps everybody. Even helping a bit once in a while (like typing in the track names of a CD nobody else had ever entered) benefited you and the system. Instead of making you feel bad for "only" doing 99%, a well-designed system makes you feel good for doing 1%. People complain about systems that have lots of "freeloaders". Systems that do well with lots of "freeloading" and make the best of periodic participation are good. Open Source software fits this criterion well, and its success speaks for itself.

So, here we have another design criterion for a type of successful system: guiltlessness. Not only should people need to do no more than what's best for themselves in order to help others, they shouldn't have to do it every single time.

Friday, January 28, 2005 
Adding fiber is lots of work and Bob's new essay [link]
All over the town where I live, Newton, Massachusetts, there are Verizon trucks. I mean lots of them, for weeks and weeks. As I write this there are three in front of my house pulling more fiber from the run down my street over to some homes on a little street up a hill. As I wrote back last October, Verizon is putting in fiber to the home in some communities around the country, and I live, Glory Be!, in one of them. Within a few months, hopefully, I'll have 15Mb down/2Mb up for about $45/month, with an (expensive) option for much more, and the fiber itself can go to almost any capacity.

That's the good news. The bad news: This is a lot of work, and it's clear it will take a tremendous effort to put it in every community in the country. Good for the people who put up the lines. Maybe it is time for wireless.

Truck with cherry picker and orange cones; lineman drilling into a telephone pole from a cherry picker; truck; two guys with a large reel of fiber cable
Scenes all around the city installing fiber to the home
Speaking of wireless, Bob Frankston has just posted a nice little piece on SATN. He shows in a new way the absurdity of a lot of the regulatory approach to "communications". He comes up with specific terms for the technology of radio wave communications in the old days (SFS and SHS) to make it clear that they are not the only way to use radio technology and to make it easier to contrast laws based on them with the technologies of today. A great quote: "SFS and SHS seemed wonderful in their time just as leeches seemed essential to 18th century medicine."

