December 29, 2006

Time to update your feeds

Well, there had to be a snag, didn't there?

It seems that when you update your template in the new Blogger, it gets rid of your old feed, in my case http://jdupuis.blogspot.com/atom.xml. In other words, the feed the vast majority of my subscribers use. You'd figure Blogger could maintain the old feed for simplicity's sake, or at least make it very apparent BEFORE you upgrade that your feed is going to disappear. That way, you could send out a message to let your subscribers know that it's going to happen while you still can. Of course, after the upgrade, it's too late.

In any case, the new feed is http://jdupuis.blogspot.com/feeds/posts/default.
The FeedBurner feed also seems to be still working.
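For the curious, here's a quick and dirty way to check which feed URLs are actually alive -- a minimal sketch only, assuming you have the Python feedparser library installed (the two URLs are just the feeds mentioned above):

    import feedparser

    # Try each feed URL and report whether it parses cleanly
    feeds = [
        "http://jdupuis.blogspot.com/atom.xml",             # the old feed
        "http://jdupuis.blogspot.com/feeds/posts/default",  # the new feed
    ]

    for url in feeds:
        parsed = feedparser.parse(url)
        if parsed.bozo or not parsed.entries:
            print(url, "-> looks broken or empty")
        else:
            print(url, "->", len(parsed.entries), "entries, most recent:", parsed.entries[0].title)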

I knew I should have done all my experimenting on the other blog.

Update: Either Blogger or Bloglines seems to have adjusted and picked up the feed with the old URL. Never mind, everything seems to be OK now.

Update 2007.01.01: The feed here seems to be broken. I'm going to try and fix it, but I'm not hopeful. If by some miracle you use that feed and happen to be reading this anyway, one of the other feeds is a better idea at this point.

December 28, 2006

Welcome to the new look

Well, it's been 4+ years since I changed the template here, so I guess I was about due. I chose Minima Stretch, but I'm not sure about the stretch part -- that the posts take the full width of the screen rather than just a narrower part of it. I'll give it a few days to sink in and then see what I think. If anyone has any thoughts on the new look and/or template, please chime in. I'm pretty design-impaired. As well, I'm curious about how the new Blogger template affects the various feeds.

December 22, 2006

Year end

I might have one or two more posts for today or tomorrow, then that'll be it for this year. I'll be back sometime during the first or second week of January. During the break, probably next week, I'll (finally) be upgrading to the new version of Blogger and will probably do a test post or two, but those I'll just delete when I'm done. I may also review a science book or two on the other blog, posting links here, but that's probably the extent of it.

Looking forward to next year, some notes on what I have in mind, from a blogging point of view:


  • I'm pretty happy with my sabbatical posting frequency of 3-5 posts per week and will endeavor to maintain that rate. I also hope to maintain the current ratio of longer vs. shorter posts. There's been a bit of an increase in hits on the blog, most likely due to a combination of more thoughtful posts and greater frequency. The most recent My Job in 10 Years post was probably my most popular post ever, in terms of hits, although curiously it didn't seem to generate a lot of mentions in other parts of the biblioblogosphere, leaving me to wonder how it got so many hits.

  • I hope to complete the My Job in 10 Years series in the January/February time frame. The remaining chapters will probably include posts on Instruction, Physical & Virtual Spaces, Outreach to faculty and students and some sort of Conclusion. I'm hoping that none of them get out of control like the Collections posts and end up at thousands of words.

  • I've had a post bubbling under for several months now where I want to take a close look at Google Scholar and see how it could become a really good tool for general-purpose research.

  • The conference (and conference blogging) travel picture is still uncertain. SLA in Denver seems likely, as does Computers in Libraries. Also the World Horror Convention is in Toronto in 2007.

  • I'm certainly attending the Ontario Library Association Super Conference, where I'll be presenting and convening. I definitely plan to blog the conference. And since York is hosting WILU in 2007, I'm pretty sure I'll attend and blog that conference as well.


Lots of bloggers seem to be doing this year-end summary meme, so I'll join in. What you do is copy down the first sentence of the first post of each month of the year.

January: A potpourri of bookmarks sitting around my desktop, many only vaguely related to our core mission here.

February: There's lots of jargon and acronym-speak out there in the tech world and sometimes it can be hard to wade through it all, especially when something new and hot appears on your radar and you want to get a handle on it.

March: Another long article comparing Wikipedia to Britannica, this time at Information Today.

April: From Marika Asimakopulos on PAMNET.

May: From The Infinite Matrix, Eileen Gunn's haunting Tour of Chernobyl.

June: As usual, O'Reilly has a good idea that other scitech publishers should take note of.

July: Brian Gray at e3 Information Overload is hosting the latest Tangled Bank carnival.

August: But I probably won't resume regular blogging for a few more days.

September: In the back-to-school spirit, via John Scalzi, a list of Top Ten No Sympathy Lines (Plus a Few Extra) by Steven Dutch, Natural and Applied Sciences, University of Wisconsin - Green Bay.

October: I must admit to getting addicted to all the A/V stuff going on out there in the science-y web.

November: From Ubiquity (all OA)

December: Two quickies for a Friday afternoon.

Conclusions? I seem to like doing laundry list posts at the beginning of the month, but noting down my reading lists has always been an integral part of this blog anyway. I also seem to like to use links and quotes to start off my own posts and get my own thoughts going -- again, not much of a surprise.

In any case, for those of you celebrating something this time of year (even if only a brief respite), enjoy.

Update: Added WILU to conference list.

December 20, 2006

Web 2.0, baah… it’s just the Internet!

That's the title of a recent post by Rob Knop commenting on the recent We Are the World article in Time magazine. For those who've missed it, Time has named basically everyone as Person of the Year in recognition of the explosion of the interactive web. Since we all create content for the web, we're all People of the Year. Except Canadians, of course. Our Person of the Year is new PM Stephen Harper.

Well, I'm always a bit skeptical of these grand pronouncements, even though with three blogs I'm as much a person of the year as anyone, except for the Canadian thing, of course. And Rob Knop is a bit skeptical of all the hype too. After all, it's just the Internet!

The idea is that in 2006, more than any individual, the growth and explosion of many individuals doing lots of small things has influenced our culture. Arguably, the most important visual media force in 2006 was online video sharing sites like YouTube, and arguably, the most important publishing phenomenon was the explosion of blogs (most of which are crap, a very few of which have been groundbreaking).

The Internet is what makes a lot of this possible.

[snip]

The Web of today is an enhanced, improved, and accessorized, but not fundamentally different, version of the Web of Tim Berners-Lee 15 years ago. More than that, it’s the Internet of at least 1986 (the first time I became vaguely aware of it), and probably before that. There’s no revolution. There’s just more people using it, and more people realizing what it really is. The hurdle to get over was the overhyped dotcom Web of the late 1990s… that was where nobody “got” the Web or the Internet, and instilled in the popular consciousness a false idea about its nature.

[snip]

Here’s the idea of the Internet: every user is a peer. We can all be both information producers and consumers. We both send and receive. It’s a conversation, not consumption.

The Internet was designed this way from the start. You hear music and movie lobbyists talking about “Peer to Peer” software as if it were some recent evil created to destroy them, to corrupt the wholesome Internet with a new and terrible way of doing things. Well, no. Peer-to-peer is simply how the Internet works, no matter how many ISPs think it makes sense to have a “no servers” policy. In the 80’s and the early 90’s, many if not most of those on the web were academics, or students at Universities. Indeed, it was physicists who created the Web to share information. And those of us on the net back then got it. Yeah, not very many people were putting up web pages with a lot of information. However, the fraction doing that was much greater than the fraction of newspaper readers who wrote newspaper articles or published a newspaper.

Well, there's a lot more, but you can just read it yourself. It's nice to have a buzz word for everyone to rally around, but we shouldn't lose sight of the fact that the web has been around for quite a while now and will certainly be around for a lot longer. Let's work to make it a better, more interactive, more informative and even more anarchic place but let's not let the big media companies tell us what's worthy or what's hot.

December 19, 2006

Inkling has arrived!

The inkycircus dynamic duo of Anna Gosline and Anne Casselman have launched a new popular science webzine called Inkling. I particularly like their subtitle: "On the hunch that SCIENCE ROCKS!"

In the LabLit interview they describe the project:

Why did you decide to launch the magazine? What are its main goals and who is the target audience?

Why? Because we think there needs to be a new voice in science journalism and we think we'd have a lot of fun helping writers and artists provide it. The main goal is really to make science more approachable, more fun, something that people can relate to better. The target audience is everyone who is curious about science but mildly turned off the heavier publications out there today. And according to the big science magazine's readership stats, many of those people are women.


Which I think is a great idea, to have a fun, accessible magazine that women and men, boys and girls, can all enjoy and learn from. Is dumbing down a possible issue? Sure, but let's give them the benefit of the doubt and enjoy the first crop of articles -- they are all short and punchy, fun but with some good content. From the about page:
Inkling is an often updated magazine on the web dedicated to science as we see it. Founded in late 2006, we cover the science that pervades our life, makes us laugh, and helps us choose our breakfast foods. We aim to capture a larger proportion of female readers, but, of course, everyone is always welcome.

Some of the articles from the first issue, definitely showing a bit of flair and sauciness not often seen in other science publications.
And a whole bunch of others, all well worth checking out. Email subscription and RSS notification are both available for the 1.0 and 2.0 amongst us. Via inkycircus, which was via LabLit.

December 16, 2006

Write a book about your entire book stock and turn left at Albuquerque

Think of this post as a slightly late Friday Fun post...

Via the Information Literacy Weblog, I came across the mind-stunningly hilarious Library 2.0 Idea & Equation Generator!

The basic idea is a kind of comedy mashup: combine various silly and real ideas to generate combinations that are even sillier and, occasionally, surprisingly interesting.

Check the comments out in the blog post for some more random silliness. I'll put a few of the ones I got below:


  • mashup communities of interest and apply a liberal amount of lipstick
  • evangelize about your patrons and call them a ''perpetual beta''
  • revitalize eBay and use it to replace all of your librarians
  • social networks + Netflix model = Second Life
  • Michael Gorman + Casey Bisson's WordPress OPAC = patron's privacy
  • Inter Library Loans + OPAC = Library 2.0
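Out of curiosity, here's a rough sketch of how a generator like this probably works under the hood -- nothing fancier than random picks from a few phrase lists. The phrases below are my own made-up stand-ins, not the generator's actual data:

    import random

    # Hypothetical phrase lists -- the real generator presumably has much longer (and funnier) ones
    actions = ["mashup", "evangelize about", "revitalize", "tag"]
    things = ["communities of interest", "your patrons", "eBay", "the OPAC"]
    twists = [
        "apply a liberal amount of lipstick",
        "call them a ''perpetual beta''",
        "use it to replace all of your librarians",
    ]

    def library_20_idea():
        # Glue one random element from each list into a Library 2.0 "idea"
        return "%s %s and %s" % (random.choice(actions),
                                 random.choice(things),
                                 random.choice(twists))

    for i in range(3):
        print(library_20_idea())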

Really, this stuff is just too close to being real sometimes. And those Bugs Bunny references just slay me.

December 14, 2006

Google Patent Search: Mad inventors of the world, rejoice!

Once again, the 800-pound gorilla of the search world has weighed in with a potentially dominant product in a niche market -- patent searching. And we certainly need a breath of fresh air in the free patent search world for US patents. So far, we've mostly been stuck with the USPTO site, which isn't that great and which makes you search like mad for a good .tiff file viewer. The work-around has always been to use the USPTO site for search and then use one of the other free services to actually get a readable pdf of the patent itself. Weird, kludgy and annoying. Try explaining it to a group of engineering students.

The good news is that the Google system makes it easy to search and view patents without all that fuss and bother. There's lots of information about the product on the Help and FAQ pages.

As usual with Google products, they may have rushed it out the door a little too quickly without thinking about some really important functionality:


  • Sorting by issue date or patent number. Really, doing a search and finding patents from 1926 at the top of the list is kind of crazy. Try a few searches and you'll see what I mean. They absolutely need to allow sorting by some key fields. This is the most important feature to add.
  • The Advanced Search Page is nice, letting you specify what you'd expect. However, the boxes for US and International classification codes don't have links to lists of those codes.
  • As usual, they don't tell us exactly what they have in their database. They mention in the FAQ that they're out of sync with the USPTO database by a couple of months and that they don't have applications, but they really should tell us the exact cutoff, as well as emphasizing that serious patent searchers should also consult the USPTO database to pick up anything that's missing.
  • I may have missed it, but I couldn't find an easy way to print out the whole patent document. On the other hand, if I did miss it, it probably means that it's hidden pretty well and the usability of that feature needs to be addressed. Being able to print the patents seems pretty important.

Despite those caveats, I quite like the new product. The patent detail pages are nicely laid out with good cross-referencing. I'll certainly be adding it to my patent searching repertoire when I'm teaching engineering or other patent-friendly IL classes -- but not with a really top billing until the sorting issue is fixed. Via GoogleBlog.

Update: It seems that I spoke too soon about recommending this product even provisionally. According to Carolyne Sidey on her blog and the SLA Toronto listserv, it seems that Google Patents is radically under-representing patent counts for at least some assignees. Her example of Xerox yields 18,000 on the USPTO database but only 1,200 on Google. I tried searching for Google itself as an assignee, getting 29 hits on their own product and 36 on the USPTO. I agree with Carolyne that Google Patent Search can't be recommended until this is cleared up.

December 13, 2006

Literature Roundup

It's been quite a while since I've done one of these, so I'll give even shorter tastes of the TOCs than usual to keep the post from getting out of control. They're all mostly well worth clicking through to check the rest of the articles:

Hey there Nerac!

Not sure what the protocol is on bringing up this sort of thing, but I've noticed that incoming hits from the Nerac intranet have been burning up the wires around here (relatively speaking, of course) the last day or two. Don't be shy! I'd be interested in knowing what you're thinking, either in a comment to a post or, if you all are feeling shy, emailed to jdupuis at yorku dot ca. Thanks and happy blogging.

December 11, 2006

Review of King of infinite space: Donald Coxeter, the man who saved geometry by Siobhan Roberts

From the other blog:

I'm reading a lot of science auto/biography these days, and generally enjoying it a lot. While generally not much of a fan of the "great man" theory of science history, I also tend to like a really good story...

Full review here.

December 7, 2006

My Job in 10 Years: Collections: Further Thoughts on Abstracting & Indexing Databases

To recap:

My Job in 10 Years:



(PDF version for printing here.)

The last one of these posts from way back in the fall of 2005 sparked a bit of response, in the comments and in an email from Roddy Macleod of EEVL, taking issue with my implication that traditional fee-based bibliographic databases are going the way of the dodo bird. The main bone of contention seems to be based around the value of subject-based indexing and thesauri provided in bibliographic databases versus the lack thereof in that most popular of free web search engines: Google.

Before I go too much further, I think I should clarify exactly what I’m talking about when I say Google / Google Scholar. I think I tend to use the lower case version, google, almost as one would say kleenex for tissue. I mean specialized and general freely available search engines that can be used for scholarly research. So, Google Classic, Google Scholar, the new kid on the block, Windows Live Academic, and all the others that exist now and will exist in the next 10 years. Let’s just call it googlesoft.

One of the interesting things about speculating about the future is that everyone has a different take on what’s in store; of course, this is half the fun: if we’re all thinking about the future at the same time, we get to bounce ideas off each other and hopefully change and grow our own conceptions. There are also, I guess, two ways to speculate about the future: first, how we think things are going to turn out and second, how we would like things to turn out. Dystopian versus utopian, in a way. I guess many have viewed my speculations as quite dystopian and I can live with that.

Certainly, I view our roles as professional librarians as evolving quite a bit over time, in particular from being a group who had a definite service to offer that the customers really had no choice but to use (the situation up until the early 1990s in many ways) to being a group that has to make a case for their usefulness to their potential customers (i.e. the net generation, millennials, whatever you want to call them). This group of potential customers has lots of choices about how they are going to do the research they need, both for their courses and, for faculty members and grad students, for the work that makes up their theses and research. And we must not forget that today’s connected millennials are going to be the new faculty members in that magic 10 years. Certainly the habits and expectations they have now will manifest themselves in their new, adult roles. If anything, they will be intensified. Current faculty members are certainly attached to the system of journals and conferences, to publishing monographs, to the apparatus of scholarly publishing that we know. But will they retain this attachment, and will their new colleagues have that attachment at all? I suspect the answer is that, no, they will not retain the same level of attachment to the journal/conference/monograph culture that we have grown used to. Or at least not in the same way as in the past, and particularly not the STEM crowd.

So, what does all this have to do with googlesoft?

It has to do with expectations of simplicity, it has to do with the desire to find rather than search, it has to do with convenience and most of all, it has to do with “good enough.”

Librarians place a high value on subject classifications, controlled vocabularies and all those parenthood issues. And they’re incredibly powerful tools to make our databases easier and more useful. But, when you get right down to it, formal human-generated subject classifications aren’t the only strategies for deciding what a particular document is about. There are informal human-generated classifications that can be useful (i.e. folksonomies) as well as automated text mining subject classification methods. That is certainly an active area of research and development and it’s not hard to imagine that a lot of progress is going to be made in the next decade. And certainly, these automated and informal methods are going to be an awful lot cheaper than formal human ones. And that’s going to be important, because it’s also very important that googlesoft remain free to users. And text mining isn’t the only way to decide what a document is about. There is also relevance ranking via links, a popular method that googlesoft et al already use to find the most relevant documents in a search. Will formal, human-created article metadata disappear completely in just 10 years? I doubt it, but we’ll definitely start to see the shift in that time frame. So, the first reason I think that subscription A&I databases are in trouble is because I believe that ultimately a “good enough” system of automated subject classification will be devised that will work in tandem with user-generated tagging and keyword assignment and, where necessary, human-generated formal classification. (Remember, I’m talking A&I services here, not book cataloguing, which I don’t think will be affected in the same way.)
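To make the "automated" part a little more concrete, here's a deliberately crude sketch of the flavour of thing I mean -- pulling candidate subject terms out of an abstract by nothing fancier than counting words. Real text mining systems are vastly more sophisticated than this (and the sample abstract below is just something I made up), but even a toy like this hints at how subject metadata might be generated without a human indexer:

    import re
    from collections import Counter

    STOPWORDS = set("""a an and are as at be but by can for from has in is it
    of on or that the this to was were which with without""".split())

    def candidate_subject_terms(text, how_many=5):
        # Lowercase, split into words, drop stopwords and very short tokens,
        # then return the most frequent remaining terms as rough "subject" keywords.
        words = re.findall(r"[a-z]+", text.lower())
        words = [w for w in words if w not in STOPWORDS and len(w) > 3]
        return [term for term, count in Counter(words).most_common(how_many)]

    abstract = ("Controlled vocabularies and thesauri improve retrieval in "
                "bibliographic databases, but automated text mining can assign "
                "subject keywords to articles without a human indexer.")

    print(candidate_subject_terms(abstract))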

The second reason is because I think our users like using the free ones, and that they will continue to like the ones that they’ve grown up with and used in elementary school and high school. They’re quick and easy, and they mostly return fairly relevant hits for most clearly defined topics. It’s just easier to find “good enough” and that is not necessarily a bad thing. And this is a trend that will only get more pronounced over time as the expectation comes around to quicker, easier, more integrated, more connected, more open. Our patrons will increasingly get addicted to those things long before we see them; it’ll happen in high school. It’ll be a huge challenge for the subscription database vendors to compete with googlesoft in the coolness, openness and ease of use categories. And it will be our job to make sure our students understand how to use these search engines effectively, just as it has been our job to make sure they use current products effectively.

Think for a minute. Compare the revenue and market capitalization of Google and Microsoft with those of Elsevier and Thomson. (Take a look here.) Who has the resources to radically improve their products, to acquire metadata, to market and promote, to win this particular battle of free vs. fee?

Where are the publishers in all this? They want the best and widest distribution of the metadata for their publications. Whether OA or subscription-based, eyeballs looking at documents, creating impact, that is what is going to drive their business model. That is how they will justify themselves to their funders, be they governments, libraries, authors, whatever. The publishers are probably even now starting to realize that it really doesn’t matter if someone finds your document through INSPEC or Google Scholar, as long as they find it and recognize the value you as a publisher provide. Certainly, there have been studies that show that open access documents have a greater impact than non-OA; it would seem to follow that more widely available and searchable metadata would also have a greater impact for the author and publisher. Subscription A&I databases are potentially in trouble because content publishers will gladly distribute their metadata to anyone and everyone who wants it because it is impact that drives their business model.

Another thing that we must remember – as librarians, our loyalty is absolutely to our patrons, not to the A&I or content publishers. Obviously, we want those organizations to do well enough to continue to be able to provide their products to us, but really that is our only interest in their survival as organizations. We value them for what they provide for our users (of course, it’s quite complicated here, as I certainly value and appreciate scholarly societies very differently than commercial publishers). Over the decades, the organizations that have helped us to provide products and services to our patrons have evolved and changed as we and our users continue to evolve and change. If today we spend a fraction on serials binding compared to 10 or 20 years ago, well, we make our decisions based on our needs and the needs of our users, not the needs of our vendors. As librarians interested in free and open access to scholarly output, we enthusiastically support the Open Access movement. Good quality free discovery tools are just as much a part of the goal of providing access to that output as good quality free journals. Just as I mostly don’t care what the business model is of companies that provide OA journals (i.e. scholarly society, commercial publisher, dedicated OA publisher, somebody in their basement), I also mostly don’t care what the business model is of companies that provide freely available search engines. Can an A&I company add enough value to the metadata to make it worth paying, no matter what? Sure, look at SciFinder as a perfect example. Subscription A&I databases are potentially in trouble because librarians’ loyalty to them is contingent on the value they add to the information discovery process.

So, to sum up, my real goal is to serve my user community as best I can and if in the longer term I see an opportunity to maximize my expenditures on content or infrastructure by minimizing my expenditures on discovery tools, I will seize it. What’s the time frame for me to make this kind of shift? I think that in the next decade we will certainly start to see expenditures on A&I databases diminish as free alternatives get better and, more importantly, are perceived (by our users and, ultimately, by us too) as equivalent to the more expensive alternatives. The A&I databases that survive this shake-out will be the ones that find ways to very significantly add value to raw metadata.

As usual, I realize prognostication is a risky business at best and I may be proven completely wrong on all of this (maybe even tomorrow!), so all disagreement, debate, comments and feedback are appreciated, as a comment here or email to jdupuis at yorku dot ca.

Next up: Instruction. (Hopefully much sooner than 14 months.)

December 6, 2006

Google Librarian Newsletter on Google Scholar

The latest Google Librarian Newsletter has a couple of articles on Google Scholar. One of them is a series of profiles on the various people on the GS team. Another is a video of the overview presentation they did at ALA.

Most interesting, however, is an interview with Anurag Acharya, Google Scholar's founding engineer.

Some fun bits:

TH: What is your vision for Google Scholar?

AA: I have a simple goal -- or, rather, a simple-to-state goal. I would like Google Scholar to be a place that you can go to find all scholarly literature -- across all areas, all languages, all the way back in time. Of course, this is easy to say and not quite as easy to achieve. I believe it is crucial for researchers everywhere to be able to find research done anywhere. As Vannevar Bush said in his prescient essay "As We May Think" (The Atlantic Monthly, July 1945), "Mendel's concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential."
Yes, they do want to take over the world -- A&I databases, look out. (More on that in the next day or two, he foreshadows.)
TH: Why don't you provide a list of journals and/or publishers included in Google Scholar? Without such information, it's hard for librarians to provide guidance to users about how or when to use Google Scholar.

AA: Since we automatically extract citations from articles, we cover a wide range of journals and publishers, including even articles that are not yet online. While this approach allows us to include popular articles from all sources, it makes it difficult to create a succinct description of coverage. For example, while we include Einstein's articles from 1905 (the “miracle year” in which he published seminal articles on special relativity, matter and energy equivalence, Brownian motion and the photoelectric effect), we don't yet include all articles published in that year.

That said, I’m not quite sure that a coverage description, if available, would help provide guidance about how or when to use Google Scholar. In general, this is hard to do when considering large search indices with broad coverage. For example, the notes and comparisons I have seen about other large scholarly search indices (for which detailed coverage information is already available) provide little guidance about when to use each of them, and instead recommend searching all of them.
He dissembles a little here. We know that they index publisher-provided metadata. Just tell us what that is. I can understand that it's hard to figure out what's what in the stuff they crawl on the free web, but they should know what deals they've made with publishers -- that's what we want to know. Who's in and who's out. I suspect that they don't want us to know that there are some pretty significant publishers that aren't covered.
TH: Some librarians consider Google Scholar's interface too limited for sophisticated researchers. Do you plan to provide more options for manipulating or narrowing search results?

AA: Our experience as well as user feedback indicates that Google Scholar is widely used by researchers of all levels of sophistication -- from laypersons to leading experts. This is not surprising. LibQual's study of use of search habits of undergrads, graduate students and faculty members (presentation available here) shows that all three groups prefer general search engines with broad coverage and do so roughly with the same frequency.

Regarding options for narrowing and manipulating results, we do provide some on the advanced search page. However, we have found that other than time-based restrictions (to search papers from the last few years), none of these options see much use. More generally, we refine the user interface for Google Scholar based on how people actually use it. Instead of considering a laundry-list of features we may add, we consider a list of frequently-performed operations and see how well we support them. A long list of unrelated features wouldn’t be of much use. This is not surprising. For example, few of the tools in a full-featured Swiss Army knife see much use over its entire lifetime.
In other words, "good enough" is good enough for the vast majority of real researchers doing their day-to-day work, with only librarians doing the complaining. I'm actually pretty sympathetic to this point of view, with more to come on that front. (Once again, foreshadowing...)

December 5, 2006

ACM SIGMIS Database on Achieving diversity in the IT workforce:

Lots of interesting stuff here -- ACM SIGMIS Database, v37i4, Fall 2006:

It's great that a good number of these papers are available with no subscription access barriers.

December 4, 2006

Blogorama

It's been a while since I've done one of these posts, and it seems that I've added a whole ton of new feeds. The interesting thing is that I've added a few library blogs, which I haven't done in a while. As usual, any suggestions for good library or scitech blogs are appreciated.


  • Reviews.com News is a new blog for the ACM Computing Reviews service. It looks like it'll be pretty interesting, especially since they'll be highlighting new Hot Topics essays like this one on Game Theory & Electronic Markets. Organizational blogs like this one seem to go one of two ways -- either they catch on and become good sources or they seem to lose interest internally and get less interesting over time. Here's hoping that the Computing Reviews folk fall firmly in the former category.
  • Search Engine Land -- lots of posts about the search engine industry. It'll be interesting to see in a few months whether I end up following this blog, Search Engine Watch's blog, or both.
  • Brian Gray's newish blog, Are you 2.0 yet? is an interesting look at L2 from a scitech library perspective.
  • Library 2.0: An Academic's Perspective by Laura Cohen offers occasional thoughtful & provocative essays on various issues where academic librarianship & L2 collide. Kind of like a review journal, in a way.
  • LibraryZen blog -- the blog that complements the wonderful new search engine, LISZEN.
  • Time's Eye on Science -- I'm always willing to try new general interest science blogs, and this looks like a good one. These do tend to have a short life expectancy on my feed reader, though, so it remains to be seen if it'll keep my interest long term.
  • Center for Science Writings Blog is by John Horgan's institution, so it should be good. Not too frequent posting so far.
  • Galactic Interactions -- Rob Knop's blog, mixing science, science studies, politics and other stuff.
  • Adventures in Applied Math is a good new-to-me CS blog.
  • Science Books Blog. Just what it sounds like. I'm hoping that this stays a long term, vital source of commentary and recommendations.
  • Knowing and Doing -- A blog by computer scientist Eugene Wallingford. A good blog that gives real insight into the life of an academic cs prof.
  • Egg -- Like a Bird's Egg is, as I've mentioned before, Chris Leonard's new blog. Chris is a former editor at Elsevier, now at Phys Math Central.
  • Big Monkey, Helpy Chalk and Science Musings Blog are two philosophy of science blogs. I always like to add a couple of those as they keep us all honest.

December 3, 2006

LabLit Survey: Best way to inspire young scientists?

Run on over to LabLit to give your answer to the survey:


  • Kid's TV programs with lots of explosions
  • Not being afraid to teach the subjects in depth
  • Intriguing scientist role models in films, books and on TV
  • Shift culture so that science is celebrated, not feared
  • Featuring scientists in celeb mags like Hello

I chose "Intriguing scientist role models in films, books and on TV" because that's where I think kids build an identification with various roles -- be it police officer, doctor, lawyer, whatever. I'm sure if someone took the trouble to invesitgate the issue that the professions that are best represented in the media are also the most popular among students and with the most diversity among those students. Kids see the stars in the media and the natural inclination is to say, "Hey, I could do that!"

December 1, 2006

Search engines & writing for scitech students

Two quickies for a Friday afternoon:

November 30, 2006

Information Architecture 3.0

Ok, I know any post that has something with the format "xxxxxxx n.0" makes us all want to poke our eyes out with a dull spoon, but bear with me on this one.

Peter Morville, librarian, information architect and co-author of the classic Information Architecture for the World Wide Web, has an interesting article, Information Architecture 3.0, at his Semantic Studios site where he makes a plea of sorts for web designers to pay more attention to design considerations when they cobble together their shiny new 2.0 web sites.

[T]his future is self-evident in the undisciplined, unbalanced quest for sexy Ajaxian interaction at the expense of usability, findability, accessibility, and other qualities of the user experience.

Of course, user hostile web sites are only the tip of the iceberg. Beneath the surface lurk multitudes of Web 2.0 startups and Ajaxian mashups that are way behind schedule and horribly over budget. Apparently, nobody told the entrepreneurs about the step change in design and development cost between pages and applications.

Followed by an interesting definition of Information Architecture:
Perhaps we should take a moment, before proceeding, to review the definition of information architecture:
  1. The structural design of shared information environments.
  2. The combination of organization, labeling, search, and navigation systems within web sites and intranets.
  3. The art and science of shaping information products and experiences to support usability and findability.
  4. An emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape.

He goes on to explore the discipline of IA, the role that information architects play and the community of practice they belong to.
Over the past decade, information architecture has matured as a role, discipline, and community. Inevitably, we’ve traded some of that newborn sparkle for institutional stability and a substantive body of knowledge. It’s for this reason that some of the pioneers feel restless. And, while I applaud their courage and entrepreneurial zeal, as they step beyond the role and the discipline, I hope (for their sake and ours) that they stay connected to the information architecture community.

For those of us who continue to embrace the role and discipline, there’s so much going on already, and the world of Information Architecture 3.0 will only bring more challenges, more opportunities, and more work.
The post has attracted a number of comments which Morville addresses very directly and honestly. Good stuff.

November 29, 2006

Back to the Basics on Science Education

That's the title of an article by Paul D. Thacker a few days ago in InsideHigherEd.

The best approach to teaching science is to understand not education, but the scientific method, according to Carl Wieman. In a speech on this idea Friday night, he began with a hypothesis: “We should approach teaching like a scientist,” he said. The outcome will rely on data, not anecdote. “Teaching can be rigorous just like doing physics research.”

*snip*

During the talk on Friday, Wieman said that traditional science instruction involves lectures, textbooks, homework and exams. Wieman said that this process simply doesn’t work. He cited a number of studies to make his point. At the University of Maryland, an instructor found that students interviewed immediately after a science lecture had only a vague understanding of what the lecture had been about. Other researchers found that students only retained a small amount of the information after watching a video on science.

*snip*

While Wieman said that he does not have all the answers for restructuring how science is taught, and added that he is still trying to figure out the best way to teach, he did offer suggestions. First, reduce cognitive load in learning by slowing down the amount of information being offered, by providing visuals, and by organizing the information for the student as it is being presented. Second, address students’ beliefs about science by explaining how a lecture is worth learning and by helping the students to understand how the information connects to the world around them.

Finally, actively engage with students, so that you can connect with them personally and help them process ideas. “We have good data that the traditional does not work, but the scientific approach does work,” he said. He added that it is important that members of a technologically advanced nation that is dealing with difficult topics such as global warming and genetic modification begin to think like scientists.
The talk was given at the recent Carnegie Foundation for the Advancement of Teaching’s centennial celebration. I think it's valuable that science faculty are engaging with the needs of the students in their classes, the need to engage and, yes, entertain students rather than just trying to open up the tops of their heads and pour it all in. One of the ways to attract and retain good science students is to make it seem like fun to be a scientist, even fun to learn to be a scientist; more fun than whatever a given student's second or third choice might have been.

As we can all imagine, the comments section for the article was pretty lively, with at least one low blow:
Like Humanists

Well well well. So scientists are going to have to begin teaching like Humanists: smaller classes, real discussion, close reading, theoretical underpinnings. About time, too.

Joseph Duemer, Professor at Clarkson University
Ouch. Like no one's ever been in a boring history class? Or an overcrowded psych or poly sci? Probably no one should be too smug:
Science not the only problem

The difficulties students encounter in learning science have been well documented, and Carl Wieman has certainly been one of the heroes in this story. But we should also note that we have not done so well in other very important areas as well. For example, Derek Bok in Our Underachieving Colleges refers to extensive research showing that universities and colleges have depressingly little effect on critical thinking and postformal reasoning — areas that we claim to be very good at teaching. And our lack of success in these important areas seems to be independent of major, type of institution, etc. This would seem to indicate that we all — scientist and humanist — need to pay a lot more attention to the research in teaching, as suggested by Wieman.

Lloyd Armstrong, Professor

November 25, 2006

Stop me before I post on science books again!

There seems to be something in the water as lists are everywhere these days. It must be the holiday shopping season. Well, the Globe and Mail has joined the fun with their annual Globe 100 list of the most notable books they've reviewed in the last year. There's a Science & Nature section as well as relevant selections in the Biography and History sections. The list seems pretty good, if more than a little heavy on the environmental and cognitive science books this year, but I guess that's just the way it is with book reviewing practices in newspapers. Hot topics rule the day rather than any kind of balanced coverage that would indicate real editorial direction.

In any case, if you're buying a present for a science-y person this year, you probably can't go wrong with one of these but I'm sure there are other lists with a bit more variety:


  • Reluctant Genius: The Passionate Life and Inventive Mind of Alexander Graham Bell by Charlotte Gray
  • The Reluctant Mr. Darwin: An Intimate Portrait of Charles Darwin and the Making of His Theory of Evolution by David Quammen
  • Heat: How to Stop the Planet From Burning by George Monbiot
  • The Revenge of Gaia: Why the Earth is Fighting Back -- and How We Can Still Save Humanity by James Lovelock
  • The Creation: An Appeal to Save Life on Earth by E. O. Wilson
  • Darwinism and Its Discontents by Michael Ruse
  • Pandemonium: Bird Flu, Mad Cow Disease, and Other Biological Plagues of the 21st Century by Andrew Nikiforuk
  • Theatre of the Mind: Raising the Curtain on Consciousness by Jay Ingram
  • This Is Your Brain On Music: The Science of a Human Obsession by Daniel J. Levitin
  • The Weather Makers: How We are Changing the Climate and What It Means for Life on Earth by Tim Flannery
  • Field Notes from a Catastrophe by Elizabeth Kolbert
  • Being Caribou: Five Months on Foot with an Arctic Herd by Karsten Heuer
  • Bringing Back the Dodo: Lessons in Natural and Unnatural History by Wayne Grady
  • Stumbling on Happiness by Daniel Gilbert
  • Thunderstruck by Erik Larson
A few other non-science books caught my eye as well such as The Immortal Game: A History of Chess or How 32 Carved Pieces on a Board Illuminated Our Understanding of War, Art, Science, and the Human Brain by David Shenk, A Writer at War: Vasily Grossman with the Red Army, 1941-1945, edited and translated by Antony Beevor and Luba Vinogradova and The Library at Night by Alberto Manguel.

These types of lists always beg the question of what's missing. As far as I can tell, the most glaring omissions this year are the David Suzuki memoir and the Donald Coxeter biography, both of which should have made the cut at the very least based on an important Canadian connection. Of course, the lack of any science fiction or fantasy books in the list was particularly galling for me -- the Globe's reviewing decisions are generally quite shameful in their ignorance of fantastic fiction.

Update:
In the comments, Richard Akerman points to the running science books list for the CBC Radio show Quirks & Quarks. There are about a dozen books recommended based on 2006 shows (and many more from older shows), including another strong book with a Canadian connection that probably should have made the Globe list: Lee Smolin's The Trouble With Physics. Most of the books (and all the recent ones) have links to the audio of the show where they were discussed. Podcasts of Quirks & Quarks are also available.

November 24, 2006

Friday Fun: The last man vs machine match?

Tomorrow Undisputed World Chess Champion Vladimir Kramnik begins a six-game match with the ChessBase program Deep Fritz 10.

Given the history of these types of matches, this might be the last time the human has a decent chance to win or draw. Chessbase has an English translation of a long article, The last match man vs machine? by André Schulz, from Der Spiegel, talking about the match.

Much depends on preparation. Kramnik is being assisted by the German grandmaster and openings specialist Christopher Lutz. In addition he has included a chess programmer in his team, one who will, he hopes, be able to explain to him how his opponent “thinks”.

For the preparation phase Kramnik received in May this year the latest version of Deep Fritz. The final version, the one against which he will play in Bonn, was sent to him in the middle of October. Since then he and his seconds have been able to search for weaknesses in the real thing.

That is exactly what Kramnik did in the Bahrain match. At the time he discovered that Deep Fritz 7 was not playing well in positions that included doubled pawns. As a result Kramnik played a Scotch opening against the machine, one that gave Black doubled pawns on c7 and c6.

In earlier days the youthful Deep Fritz would often be manoeuvred into positions with an isolated centre pawn by its human opponents. This is normally a weakness, but the program would defend this pawn like a tiger its cub, cleverly using the adjacent open files to do so. The weakness became a strength.

For the opening preparation against Kramnik the Deep Fritz team has hired a top grandmaster, who is a great openings specialist. But his name is a secret. This is normal in important chess tournaments, where players don’t want their opponents to know what they are planning. The exact speed of the computer and the modification to the openings book are the two unknown factors for Kramnik in this match.
The rules are a bit bizarre, as Kramnik gets to follow along with Fritz on his own computer while Fritz is in its pre-determined opening book.
As long as Deep Fritz is “in book”, that is playing moves from memory and not calculating variations, Mr. Kramnik sees the display of the Deep Fritz opening book. For the current board position he sees all moves, including all statistics (number of games, ELO performance, score) from grandmaster games and the move weighting of Deep Fritz. To this purpose, Mr. Kramnik uses his own computer screen showing the screen of the Deep Fritz machine with book display activated.

As soon as Deep Fritz starts calculating variations during the game the operator informs the arbiter. The arbiter confirms this on the screen of the playing machine and then shuts down the second screen.
This eliminates one of the computer's advantages, the ability to completely and perfectly "memorize" extremely long opening variations (customized by Chessbase's team of grandmasters to be as effective as possible against Kramnik), something a human can't do. Once Fritz is out of the book, then they get down to real tactics and strategy. It should be interesting to see who prevails in this kind of struggle. Mig has some nice commentary here.

Update: Game one was a draw, you can follow the live action here.

Update 2006.11.26:
Game two was a brutal loss by Kramnik in the worst blunder of his pro career. People are already calling it the worst move ever made by a sitting champion. Commentary here, here and here. To show that there are no hard feelings, however, I may post one of my own really brutal blunders a bit later on.

November 23, 2006

Enough with the science books, already

Via Critical Mass, a two part interview (one, two) with science author & journalist Michael Lemonick in the Kenyon Review.

A great interview with lots of interesting bits on the life of the science writer. A taste from each of the two parts:

LL: Is it difficult for you to write science stories for things you don’t necessarily have a background in?

ML: It’s certainly harder. What that just means is that I have to ask more questions, and ask for more basic explanations than I might for other areas I’m more familiar with, but my strong belief is that with a bit of effort, I can understand pretty much any area of science at the level I need to in order to explain it to—well, to you. Except mathematics, which I don’t think is possible to write about in a coherent way. Mostly. There’s a very small number of things.

*snip*

LL: You also blog. How does writing for the blog differ for you?

ML: It’s a more informal form, and also the choice of topics is much looser. In the magazine, everything is done by committee. Everything is done by getting a group of people to agree at many levels to do the story. But the blog—I don’t get permission, I just do what I feel like. Some things I write about are very silly, some are serious, and some are argumentative. It’s more informal in writing style and also in the way I think about the whole process. I think of a blog as a conversation, and the sort of thing like where I run into a friend and say, “Oh, you’ll never guess what I just heard! Did you know they are doing such and such?”
Horgan seems to have started a bit of a meme on science books, with posts turning up all over. Horgan's third post on the worst books is here. If you poke around in the various search engines, you'll also get a lot of interesting hits. Some examples: Technorati, ScienceBlogs, Google Blog Search. From what I can tell, Richard Akerman of Science Library Pad is the only other one from the biblioblogosphere to weigh in.

November 22, 2006

While we're on the topic of science books...

Following up on yesterday's post, I thought I'd mention a couple of my favourite science book resources.


  • First of all, the Science Book Reviews by Philip Manning is great. He reviews a fair number of books, as well as listing the new books he sees every week. He also has compiled "best of the year" lists for the last few years. I find this site handy for my own interest and for collection development.

  • The Science Books Blog by Jon Turney grew out of the Royal Institution's attempt a little while back to select the Best Science Book Ever, which turned out to be Primo Levi's Periodic Table. There are lots of good lists and discussion on the blog, making it a useful addition to any science person's blogroll. The posting frequency seems to have declined a bit since the contest ended, but I do hope that Turney will keep up the good work with commentary and reviews about good science writing.

  • LabLit doesn't review a lot of non-fiction, but they do review some, such as Dawkins' God Delusion. Their mission is mostly to promote fiction about science and scientists (as opposed to science fiction), so a lot of the stuff they talk about is peripherally related to public perceptions of science and the place of science in society, topics all covered in science non-fiction as well. They did also post an article about the Royal Institution's contest mentioned above.

The book sitting on the table beside me right now is King of Infinite Space: Donald Coxeter, the Man Who Saved Geometry by Siobhan Roberts. I'm about 100 pages into it and am enjoying it tremendously.

Invitation to Digiblog

An interesting new member of the biblioblogosphere has just come on stream. Digiblog describes itself this way:

Until the ALCTS Midwinter Symposium in Seattle begins, Digiblog will be the home for discussing controversial statements relevant to all those interested in the future of library collections, technical services, and services to users. Statements and opinions expressed on Digiblog represent their authors' views only and do not represent the viewpoint of ALCTS.
ALCTS stands for the Association for Library Collections & Technical Services. While it seems that this blog has only been envisioned as a temporary place to start some pre-conference discussions, I hope that the members of ALCTS find a way to make it a permanent blog; collections and technical services are vitally important issues for libraries and librarians and I don't think they get quite the play in the biblioblogosphere that they deserve. Drop by and take a look at their two controversial statements posts to see what I mean. Via Cindy Hepfer on ERIL-L.

November 21, 2006

Best and worst science books

John Horgan is helping us all set up our reading lists for the coming holiday season by highlighting a couple of lists of best science books. First he has some critical comments on the recently published Discover Magazine list of 25 best science books. As an antidote to the flaws he sees in that list, he also turns us in the direction of the Center for Science Writings of the Stevens Institute of Technology, where there's a list-in-progress of the 100 Greatest Science Books. That list is up to number fifty and still accepting nominations.

Perhaps even more interesting, Horgan gives us a list of the Ten Worst Science Books.


  1. Capra, Fritjof, The Tao of Physics
  2. Drexler, Eric, Engines of Creation
  3. Edelman, Gerald, Bright Air, Brilliant Fire
  4. Gladwell, Malcolm, The Tipping Point
  5. Gould, Stephen Jay, Rocks of Ages
  6. Greene, Brian, The Elegant Universe
  7. Hamer, Dean, The God Gene
  8. Kramer, Peter, Listening to Prozac
  9. Kurzweil, Ray, The Age of Spiritual Machines
  10. Murray, Charles, and Richard Herrnstein, The Bell Curve
  11. Wilson, Edward, Consilience
Actually, I guess it's eleven.

Luckily, I haven't read any of those yet although I do own the Gould and Wilson and may get around to reading at least the Wilson. I'm afraid I don't do that much better on the list of good books but the main reason for that is that the science books I have tended to read over the years have been mostly on computing or engineering topics, neither of which are terribly well covered in the Stevens or Discover lists. (Yes, I did nominate some good computing books.)

In the interests of self-improvement, I'll list a bunch (11!) of the science books that are in the two lists that I'd like to get around to reading. If and when I do get around to reading them, I'll certainly review them on the other blog where I have been trying to review more science books during my sabbatical this year.

  1. Dawkins, Richard, The Selfish Gene
  2. Diamond, Jared, Guns, Germs, and Steel
  3. Gleick, James, Chaos
  4. Hofstadter, Douglas, Godel, Escher, Bach
  5. Pais, Abraham, Subtle Is the Lord
  6. Penrose, Roger, The Emperor’s New Mind
  7. Rhodes, Richard, The Making of the Atomic Bomb
  8. Watson, James, The Double Helix
  9. Weinberg, Steven, The First Three Minutes
  10. Sagan, Carl, The Cosmic Connection
  11. Gould, Stephen Jay, The Mismeasure of Man
A few of these I already have lying around the house, so there's a pretty good chance they'll turn up sooner rather than later.

Update: Horgan follows up with another post, expanding on his reasons for putting The Bell Curve and Listening to Prozac on the worst book list.

November 17, 2006

Technology Leaders: Scientific American 50

Via the SciAm Blog, a story on the magazine's latest list of top scitech leaders in various walks of life.

The subsections of the article are:


All interesting reading. The first 3 links profile the three most significant leaders. The last link profiles all the rest that are listed in the SA 50 Winners and Contributors. Perhaps not surprisingly, the top policy leader is Al Gore -- if you haven't seen An Inconvenient Truth yet, rush out before it's too late.

November 15, 2006

The computer book market

Tim O'Reilly of O'Reilly Books had a two part post a couple of weeks ago on the state of the computer book market.

In Part One, he takes a look at the overall trends in the market:

There's little to say about this picture that will cheer up computer book publishers or authors. The market continues to bump around at about the same level as it has for the past three years. Some publishers express hope that the release of Microsoft Vista and the next release of Office will boost results going into next year, but so far, no new technology release has been able to move the needle for long. We suspect that the combination of increasingly sophisticated online information, easier to use Web 2.0 applications, and customer fatigue with new features of overly complex applications, combined with the consolidation of the retail book market, mean that the market will never return to its pre-2000 highs, despite new enthusiasm for Web 2.0 and the technology market in general. In addition, new distribution channels (including downloadable PDFs) are growing up as retailers allocate less space to computer books.


In Part Two, he looks at how individual technologies are doing in the book market.

His broad comments on the overall trends:
  • Web Design and Development has been the most substantial bright spot in the market, with 22% year-on-year growth in this category. This might well be expected in a period in which Web 2.0 is the buzzword du-jour. In addition to breaking topics like Ruby on Rails, AJAX, Javascript, and ASP.Net, there's been nice growth in books on web design and web page creation. Books on blogging and podcasting have also finally caught on, after several prior false starts.
  • Microsoft's server release earlier in the year is still driving strong sales of books on C#, Visual Basic, and SQL Server. However, other database topics are also up modestly.
  • The growth in books on digital photography has slowed considerably. If not for the inclusion of the iPod category, the Digital Media supercategory would be flat.
  • The hardest-hit part of the market was books on consumer operating systems, down 17% from the same period a year ago.
  • The professional development and administration segment was down 2%, but might have been worse but for the strong performance of Microsoft languages, Python, Ruby, software project management, and database topics.

I'll briefly summarize for each technology he covers:

  • Computer Languages -- Java is down, while web programming languages like Ruby, PHP and Javascript are up, sometimes way up.
  • Databases -- Oracle down, SQL Server, MySQL are up
  • Operating Systems -- Linux, especially Ubuntu, is up a bit, but it's not a fast-moving category.
  • Systems and Programming -- Art of Project Management by Scott Berkun and Jennifer Tidwell's Designing Interfaces are really driving this category. Data Warehousing, Data Analysis and Agile Development are also hot topics.
  • Web Design and Development -- Books on Ruby, AJAX and ASP are hot as are Blogging and Podcasting.
  • Digital Media Applications -- Photoshop is cold, digital photography is hot.
Lots of interesting stuff here, and a good view into what the general public wants to read. Similarly, it should give us an idea of what kinds of books our students will want to read. If the jobs are in AJAX and Ruby, those are the books they'll want as they prepare for the job search.

As a point of interest, O'Reilly does this kind of review every quarter or so.

Hey, we knew that already

A controversially titled piece over at InsideHigherEd is causing a bit of a ruckus in the comments area.

Are College Students Techno Idiots? by Paul D. Thacker reports on a study by the Educational Testing Service which basically finds that students in higher education rely far too heavily on Google when they search, and that they tend to use only the first couple of results without giving much thought to issues of accuracy, bias or recency.

Few test takers demonstrated effective information literacy skills, and students earned only about half the points that could have been awarded. Females fared just as poorly as males. For instance, when asked to select a research statement for a class assignment, only 44 percent identified a statement that captured the assignment’s demands. And when asked to evaluate several Web sites, 52 percent correctly assessed the objectivity of the sites, 65 percent correctly judged for authority, and 72 percent for timeliness. Overall, 49 percent correctly identified the site that satisfied all three criteria.

Results also show that students might even lack the basics on a search engine like Google. When asked to narrow a search that was too broad, only 35 percent of students selected the correct revision. Further, 80 percent of students put irrelevant points into a slide program designed to persuade an audience.
Of course, we librarians knew all this, and have been trying to make the case to faculty that we can help with this situation. It's actually nice to see an article like this in a publication like IHE since it helps raise the issue with faculty and also clearly makes the case that librarians and libraries can help get students using the resources that the faculty want them to.

I like the comment by Ross Hunt:
One would expect ETS to get this wrong end first: the idea that this is some sort of short-term processing problem (which will be addressed by teaching them “how to evaluate information”) ignores what the real problem is. Virtually none of my students have any notion of the ecology of texts — the fact that every text exists in a context out of which it comes and to which it speaks. They don’t know what a journal is, they don’t know what a scholarly article is, they don’t know what a magazine is — in the sense, in all those cases, that they don’t know how texts get where they are and why. For them, texts drop from Mars. And that’s because that’s the way they’ve been taught to see them by textbooks and isolated photocopies. My main job as a teacher of English — as I see it, anyway — is to introduce them into the world of texts and help them learn to survive in it. For ETS to tell us that they’re “Techno-Idiots” because they don’t know what nobody’s ever shown them is about what we should expect.

I find that as I do more and more IL sessions for scitech students, the main thing I try to teach them is what kinds of documents are available, what each type is used for and how to find each type.

This is a great article to pass around to faculty -- the title will certainly get them reading while the content should get them thinking about their friendly neighbourhood librarian.

November 14, 2006

Two from O'Reilly Radar

Two interesting posts via Tim O'Reilly:


  • O'Reilly quotes Sarah Milstein, co-author and editor of Google: The Missing Manual, about the State of Search:
    Assuming search winds up lasting 100+ years, it's still in its infancy. Still, it surprises me that the presentation of Google's main search results pages barely changed in the two years from one edition of the book to the next. The main difference is that now, onebox results with specialized information appear more frequently (though randomly) at the top of results listing. At this point, I'm ready for a better results interface.

    *snip*

    Moreover, it's no longer clear that Google is a search company. They're certainly an ad-brokering network (I'm sure everyone saw the announcement last week that they're reaching into newspaper ads now, with plans for basically all major media). And they're a provider of (mostly) Web-based productivity tools of all kinds. But a lot of those activities seem to have little to do with their mission of "organizing the world's information and making it universally accessible and useful." They do seem to have organized the world's top search experts. But as a customer, I'm not sure I'm feeling the benefit of that in my everyday searching.
    And quite a few more interesting points, mostly about Google.

  • Web 2.0 Principles and Best Practices is a post about a book of the same name prepared by O'Reilly and John Musser.
    Web 2.0 is here today—and yet its vast, disruptive impact is just beginning. More than just the latest technology buzzword, it's a transformative force that's propelling companies across all industries towards a new way of doing business characterized by user participation, openness, and network effects.

    What does Web 2.0 mean to your company and products? What are the risks and opportunities? What are the proven strategies for successfully capitalizing on these changes?

    O'Reilly Radar's Web 2.0 Principles and Best Practices lays out the answers—the why, what, who, and how of Web 2.0. It's an indispensable guide for technology decision-makers—executives, product strategists, entrepreneurs, and thought leaders—who are ready to compete and prosper in today's Web 2.0 world.
    There's an excerpt here. Sounds interesting, doesn't it? Well, the PDF will cost you US$375 and print+online US$395. The comments on the post are quite interesting, mostly about how the whole Web 2.0 thing is just a marketing bandwagon invented by O'Reilly.

    O'Reilly responds:
    I'm sorry that folks reading this blog are upset about the price -- it's a good sign that you guys want to read what we have to say -- but those commenters who noted that you aren't the target audience are exactly right. The document is targeted at the Forrester/Gartner customer, and believe me, the report is cheap by those standards.

    One of the things that we've noticed at O'Reilly is that because of our exclusive focus on the "alpha geeks," we tend to abandon markets as they mature -- just as the money starts coming in with corporate adoption. We're trying to pitch more products to this audience, so we don't remain solely early stage. It's still a work in progress.

    That being said, those of you who said we'd have done much better at, say $99, might well have been right. Pricing is always a bit of a crapshoot, where you're trading off volume against price.

    And those of you who are looking for something free, please note that my What is Web 2.0? article has been read by hundreds of thousands of people. (And Steve (the first commenter), you clearly need to read that article, since it makes clear that Web 2.0 is NOT Ajax.)

    It's an interesting idea. Good information costs money, and if you really need to know what somebody has to say, you may very well be willing to pay a lot of money for the privilege. Kinda like scholarly publishing used to be...
More from Tim O'Reilly, maybe later today or tomorrow.

Review of Countdown: A history of space flight by T.A. Heppenheimer

From the other blog:

The decision to read this book was certainly not rocket science, even if it is a book about rocket science. It's an engaging and fascinating read, and you don't have to be a brain surgeon to understand it either...
Full review here.

November 10, 2006

Science funding in Canada

Earlier this week, Ian Urquhart of The Toronto Star had a very illuminating article, T.O.'s research crossroads, commenting on the future of scientific research funding here in Toronto and in Canada as a whole:

"For all its current research and industrial strengths, it is by no means certain that the Toronto region can continue to prosper," says the report by a team of researchers at the University of Toronto.

"The remarkable growth in global competition in advanced technology industries, together with the major investments being made by governments around the world to strategically support research and innovation, present a major challenge to the Toronto region."

*snip*

Meanwhile, says the report, under the Conservative government in Ottawa, key research agencies "have either reached the end of their terms or have received no word as to their future funding."

In this respect, the report names the Canada Foundation for Innovation (which funds research infrastructure), Genome Canada (which supports genetic research), CANARIE (a national high-bandwidth network for research), and the Canada Research Chairs program (under which research talent is recruited to Canada).

"Undoubtedly, Canada is at a crossroads," says Ross McGregor, president of the Toronto Region Research Alliance, in a preface to the report. "Will our national government choose to make the dramatic investments which will move us into the top tier of innovation-intensive countries in the world? Or will we be satisfied with the status quo, which will effectively mean falling behind in the international R&D arena?"
The Globe and Mail's James Rusk weighs in as well:
Toronto ranked second only to the Boston area when measured by the number of science- and engineering-related papers published.

But it fell to fifth place in terms of patent application in the United States, which the researchers took as the measure of the Toronto region's performance in commercializing research.

The researchers also found that Canada does not support research and development as strongly as other countries in the study, such as Sweden, the United States and Singapore.

They also found that, while the Toronto region is on the cusp of becoming "one of the world's true megacentres of research and advanced technologies," it gets only 21 per cent of federal funding for research and development -- even though 35 per cent of all R&D in Canada is done in the area.
The report mentioned in the article, prepared by the Toronto Region Research Alliance, is here.

I've also had an article hanging around by David Crane in The Star, from October 22nd, Canada must find, exploit new talents, which deals more directly with the patents issue.
One measure of our ability to turn the results of our efforts in research and development into potentially commercial possibilities is the rate at which we generate new patents. Patents are a legal recognition that an idea is unique and deserves protection, allowing the inventors either to proceed to commercialize the invention themselves or to license it to others. It is one way, though not the only way, of measuring the results of our investments in research and development.

This past week the World Intellectual Property Office, known as WIPO, published its annual report, which showed that we may be getting a poor return for our research investments and that these numbers should be a cause for concern in Canada.

*snip*

We live in what is known as a knowledge economy, where ideas are the new currency — represented by talented people and the discoveries they make. In Canada, companies such as Research in Motion and CAE Inc., the maker of aircraft flight simulators, are examples where the ideas and knowledge, not the factory buildings, are the real assets of the enterprise. Indeed, in many businesses today, the real value is not in physical assets but in what we call intangibles such as ideas, skills and reputation. Microsoft is an example. This is the way of the future.

The future will also be a world of much greater competition, much faster development of new ideas, and where brainpower and capacity for risk will be the hallmarks of success. Today's marvel will quickly become tomorrow's commodity — the cellphone is a prime example and the laptop computer another. This is why The Economist recently had a whole section on the global search for talent. The level of risk aversion in our investing community and among public policy makers does not augur well for Canada's future.


The WIPO report is here.

My point here, and I do have one, is that as a society we have to come to grips with how we value science and technology. Do we want to continue to be hewers of wood and drawers of water, or do we want to step up and take our place in a new world? The wood and water (and fish and oil) tend not to provide large numbers of high-paying jobs, especially when the Canadian operations are merely branch plants of foreign-owned multinationals. If we don't put our brains to work, we'll fall behind those that do. There are positive signs, as mentioned in the articles above: a growing acceptance that the environment is an important issue, that science has provided very good insight into what is happening in our ecosystem, and that many of the things we need to do going forward will grow out of science and engineering as well as politics and economics. It would be nice if scientists and engineers had as much clout and respect and were valued as much as politicians, economists, journalists, lawyers and all the rest. When was the last time you saw a TV drama about scientists or engineers?

It all comes down to the idea that we must challenge our governments to take scientific and technical issues seriously and to value the scientists and engineers who do the research, in a way that has never really happened here. An awful lot depends on it.