December 29, 2006

Time to update your feeds

Well, there had to be a snag, didn't there?

It seems that when you update your template in the new Blogger, it gets rid of your old feed -- in my case, http://jdupuis.blogspot.com/atom.xml. In other words, the feed the vast majority of my subscribers use. You'd figure Blogger could maintain the old feed for simplicity's sake, or at least make it very apparent BEFORE you upgrade that your feed is going to disappear. That way, you could send out a message to let your subscribers know that it's going to happen while you still can. Of course, after the upgrade, it's too late.

In any case, the new feed is http://jdupuis.blogspot.com/feeds/posts/default.
The FeedBurner feed also still seems to be working.
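
For anyone else hit by this, a quick way to check which feed URLs are actually still being served is just to request each one and look at the status code. Here's a minimal sketch in Python (standard library only; the URLs are the ones mentioned above):

    import urllib.request

    # The old Blogger feed and the new default feed from this post.
    feeds = [
        "http://jdupuis.blogspot.com/atom.xml",
        "http://jdupuis.blogspot.com/feeds/posts/default",
    ]

    for url in feeds:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                print(url, "->", response.status)  # 200 means the feed is still alive
        except Exception as err:
            print(url, "-> failed:", err)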

I knew I should have done all my experimenting on the other blog.

Update: Either Blogger or Bloglines seems to have adjusted and picked up the feed with the old URL. Never mind, everything seems to be OK now.

Update 2007.01.01: The feed here seems to be broken. I'm going to try and fix it, but I'm not hopeful. If by some miracle you use that feed and happen to be reading this anyway, one of the other feeds is a better idea at this point.

December 28, 2006

Welcome to the new look

Well, it's been 4+ years since I changed the template here, so I guess I was about due. I chose Minima Stretch, but I'm not sure about the stretch part -- the posts take the full width of the screen rather than just a narrower part of it. I'll give it a few days to sink in and then see what I think. If anyone has any thoughts on the new look and/or template, please chime in. I'm pretty design-impaired. As well, I'm curious about how the new Blogger template affects the various feeds.

December 22, 2006

Year end

I might have one or two more posts for today or tomorrow, then that'll be it for this year. I'll be back sometime during the first or second week of January. During the break, I'll (finally) be upgrading to the new version of Blogger and will probably do a test post or two, but I'll just delete those when I'm done. I may also review a science book or two on the other blog, posting links here, but that's probably the extent of it.

Looking forward to next year, some notes on what I have in mind, from a blogging point of view:


  • I'm pretty happy with my sabbatical posting frequency of 3-5 posts per week and will endeavor to maintain that rate. I also hope to maintain the current ratio of longer vs. shorter posts. There's been a bit of an increase in hits on the blog, most likely due to a combination of more thoughtful posts and greater frequency. The most recent My Job in 10 Years post was probably my most popular post ever in terms of hits, although curiously it didn't seem to generate a lot of mentions in other parts of the biblioblogosphere, leaving me to wonder how it got so many hits.

  • I hope to complete the My Job in 10 Years series in the January/February time frame. The remaining chapters will probably include posts on Instruction, Physical & Virtual Spaces, Outreach to faculty and students, and some sort of Conclusion. I'm hoping that none of them get out of control like the Collections posts and end up at thousands of words.

  • I've had a post bubbling under for several months now where I want to take a close look at Google Scholar and see how it could become a really good tool for general-purpose research.

  • The conference (and conference blogging) travel picture is still uncertain. SLA in Denver seems likely, as does Computers in Libraries. Also the World Horror Convention is in Toronto in 2007.

  • I'm certainly attending the Ontario Library Association Super Conference, where I'll be presenting and convening. I definitely plan to blog the conference. And since York is hosting WILU in 2007, I'm pretty sure I'll attend and blog that conference as well.


Lots of bloggers seem to be doing this year-end summary meme, so I'll join in. What you do is copy down the first sentence of the first post of each month of the year.

January: A potpourri of bookmarks sitting around my desktop, many only vaguely related to our core mission here.

February: There's lots of jargon and acronym-speak out there in the tech world and sometimes it can be hard to wade through it all, especially when something new and hot appears on your radar and you want to get a handle on it.

March: Another long article comparing Wikipedia to Britannica, this time at Information Today.

April: From Marika Asimakopulos on PAMNET.

May: From The Infinite Matrix, Eileen Gunn's haunting Tour of Chernobyl.

June: As usual, O'Reilly has a good idea that other scitech publishers should take note of.

July: Brian Gray at e3 Information Overload is hosting the latest Tangled Bank carnival.

August: But I probably won't resume regular blogging for a few more days.

September: In the back-to-school spirit, via John Scalzi, a list of Top Ten No Sympathy Lines (Plus a Few Extra) by Steven Dutch, Natural and Applied Sciences, University of Wisconsin - Green Bay.

October: I must admit to getting addicted to all the A/V stuff going on out there in the science-y web.

November: From Ubiquity (all OA)

December: Two quickies for a Friday afternoon.

Conclusions? I seem to like doing laundry-list posts at the beginning of the month, but noting down my reading lists has always been an integral part of this blog anyway. I also seem to like using links and quotes to start off my own posts and get my own thoughts going -- also not much of a surprise.

In any case, for those of you celebrating something this time of year (even if only a brief respite), enjoy.

Update: Added WILU to conference list.

December 20, 2006

Web 2.0, baah… it’s just the Internet!

That's the title of a recent post by Rob Knop commenting on the We Are the World article in Time magazine. For those who've missed it, Time has named basically everyone Person of the Year in recognition of the explosion of the interactive web. Since we all create content for the web, we're all People of the Year. Except Canadians, of course. Our Person of the Year is new PM Stephen Harper.

Well, I'm always a bit skeptical of these grand pronouncements, even though with three blogs I'm as much a person of the year as anyone -- except for the Canadian thing, of course. And Rob Knop is a bit skeptical of all the hype too. After all, it's just the Internet!

The idea is that in 2006, more than any individual, the growth and explosion of many individuals doing lots of small things has influenced our culture. Arguably, the most important visual media force in 2006 was online video sharing sites like YouTube, and arguably, the most important publishing phenomenon was the explosion of blogs (most of which are crap, a very few of which have been groundbreaking).

The Internet is what makes a lot of this possible.

[snip]

The Web of today is an enhanced, improved, and accessorized, but not fundamentally different, version of the Web of Tim Berners-Lee 15 years ago. More than that, it’s the Internet of at least 1986 (the first time I became vaguely aware of it), and probably before that. There’s no revolution. There’s just more people using it, and more people realizing what it really is. The hurdle to get over was the overhyped dotcom Web of the late 1990s… that was where nobody “got” the Web or the Internet, and instilled in the popular consciousness a false idea about its nature.

[snip]

Here’s the idea of the Internet: every user is a peer. We can all be both information producers and consumers. We both send and receive. It’s a conversation, not consumption.

The Internet was designed this way from the start. You hear music and movie lobbyists talking about “Peer to Peer” software as if it were some recent evil created to destroy them, to corrupt the wholesome Internet with a new and terrible way of doing things. Well, no. Peer-to-peer is simply how the Internet works, no matter how many ISPs think it makes sense to have a “no servers” policy. In the 80’s and the early 90’s, many if not most of those on the web were academics, or students at Universities. Indeed, it was physicists who created the Web to share information. And those of us on the net back then got it. Yeah, not very many people were putting up web pages with a lot of information. However, the fraction doing that was much larger than the fraction of newspaper readers who wrote newspaper articles or published a newspaper.

Well, there's a lot more, but you can just read it yourself. It's nice to have a buzzword for everyone to rally around, but we shouldn't lose sight of the fact that the web has been around for quite a while now and will certainly be around for a lot longer. Let's work to make it a better, more interactive, more informative and even more anarchic place, but let's not let the big media companies tell us what's worthy or what's hot.

December 19, 2006

Inkling has arrived!

The inkycircus dynamic duo of Anna Gosline and Anne Casselman have launched a new popular science webzine called Inkling. I particularly like their subtitle: "On the hunch that SCIENCE ROCKS!"

In the LabLit interview they describe the project:

Why did you decide to launch the magazine? What are its main goals and who is the target audience?

Why? Because we think there needs to be a new voice in science journalism and we think we'd have a lot of fun helping writers and artists provide it. The main goal is really to make science more approachable, more fun, something that people can relate to better. The target audience is everyone who is curious about science but mildly turned off the heavier publications out there today. And according to the big science magazine's readership stats, many of those people are women.


Which I think is a great idea -- a fun, accessible magazine that women and men, boys and girls, can all enjoy and learn from. Is dumbing down a possible issue? Sure, but let's give them the benefit of the doubt and enjoy the first crop of articles -- they are all short and punchy, fun but with some good content. From the about page:
Inkling is an often updated magazine on the web dedicated to science as we see it. Founded in late 2006, we cover the science that pervades our life, makes us laugh, and helps us choose our breakfast foods. We aim to capture a larger proportion of female readers, but, of course, everyone is always welcome.

Some of the articles from the first issue definitely show a bit of flair and sauciness not often seen in other science publications.
And a whole bunch of others, all well worth checking out. Email subscription and RSS notification are both available for the 1.0 and 2.0 amongst us. Via inkycircus, which was via LabLit.

December 16, 2006

Write a book about your entire book stock and turn left at Albuquerque

Think of this post as a slightly late Friday Fun post...

Via the Information Literacy Weblog, I came across the mind-stunningly hilarious Library 2.0 Idea & Equation Generator!

The basic idea is to take a kind of comedy mashup of various silly and real ideas and use it to create even sillier and, occasionally, surprisingly interesting real ideas.

Check out the comments in the blog post for some more random silliness. I'll put a few of the ones I got below (with a little code sketch of the idea after the list):


  • mashup communities of interest and apply a liberal amount of lipstick
  • evangelize about your patrons and call them a ''perpetual beta''
  • revitalize eBay and use it to replace all of your librarians
  • social networks + Netflix model = Second Life
  • Michael Gorman + Casey Bisson's WordPress OPAC = patron's privacy
  • Inter Library Loans + OPAC = Library 2.0
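
For the curious, the generator is essentially just random selection from a few canned phrase lists. Here's a toy version in Python -- the fragments below are invented for illustration (loosely modeled on the outputs above), not lifted from the actual generator:

    import random

    # Made-up phrase fragments, not the generator's real data.
    actions = ["mashup", "evangelize about", "revitalize", "beta-test"]
    objects = ["communities of interest", "your patrons", "eBay", "the OPAC"]
    twists = [
        "apply a liberal amount of lipstick",
        "call them a ''perpetual beta''",
        "use it to replace all of your librarians",
    ]

    def library20_idea():
        """Randomly glue fragments together into a Library 2.0 'idea'."""
        return "%s %s and %s" % (
            random.choice(actions),
            random.choice(objects),
            random.choice(twists),
        )

    for _ in range(3):
        print(library20_idea())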

Really, this stuff is just too close to being real sometimes. And those Bugs Bunny references just slay me.

December 14, 2006

Google Patent Search: Mad inventors of the world, rejoice!

Once again, the 800-pound gorilla of the search world has weighed in with a potentially dominant product in a niche market -- patent searching. And we certainly need a breath of fresh air in the free patent search world for US patents. So far, we've mostly been stuck with the USPTO site, which isn't that great and which makes you search like mad for a good .tiff file viewer. The work-around has always been to use the USPTO site for searching, then use one of the other free services to actually get a readable PDF of the patent itself. Weird, kludgy and annoying. Try explaining it to a group of engineering students.

The good news is that the Google system makes it easy to search and view patents without all that fuss and bother. There's lots of information about the product on the Help and FAQ pages.

As usual with Google products, they may have rushed it out the door a little too quickly without thinking about some really important functionality:


  • Sorting by issue date or patent number. Really, doing a search and finding patents from 1926 at the top of the list is kind of crazy. Try a few searches and you'll see what I mean. They absolutely need to allow sorting by some key fields. This is the most important feature to add.
  • The Advanced Search Page is nice, letting you specify what you'd expect. However, the boxes for US and International classification codes don't have links to lists of those codes.
  • As usual, they don't tell us exactly what they have in their database. They mention in the FAQ that they're out of sync with the USPTO database by a couple of months and that they don't have applications, but they really should tell us the exact cutoff, as well as emphasizing that serious patent searchers should also consult the USPTO database to pick up anything that's missing.
  • I may have missed it, but I couldn't find an easy way to print out the whole patent document. On the other hand, if I did miss it, that probably means it's hidden pretty well and the usability of that feature needs to be addressed. Being able to print the patents seems pretty important.

Despite those caveats, I quite like the new product. The patent detail pages are nicely laid out with good cross-referencing. I'll certainly be adding it to my patent searching repertoire when I'm teaching engineering or other patent-friendly IL classes -- but not with top billing until the sorting issue is fixed. Via GoogleBlog.

Update: It seems that I spoke too soon about recommending this product even provisionally. According to Carolyne Sidey on her blog and the SLA Toronto listserv, Google Patents is radically under-representing patent counts for at least some assignees. Her example of Xerox yields 18,000 patents on the USPTO database but only 1,200 on Google. I tried searching for Google itself, getting 29 on their own product versus 36 on the USPTO. I agree with Carolyne that Google Patent Search can't be recommended until this is cleared up.

December 13, 2006

Literature Roundup

It's been quite a while since I've done one of these, so I'll give even shorter tastes of the TOCs than usual to keep the post from getting out of control. They're all well worth clicking through to check out the rest of the articles:

Hey there Nerac!

Not sure what the protocol is on bringing up this sort of thing, but I've noticed that incoming hits from the Nerac intranet have been burning up the wires around here (relatively speaking, of course) the last day or two. Don't be shy! I'd be interested in knowing what you're thinking, either in a comment to a post or, if you all are feeling shy, emailed to jdupuis at yorku dot ca. Thanks and happy blogging.

December 11, 2006

Review of King of Infinite Space: Donald Coxeter, the Man Who Saved Geometry by Siobhan Roberts

From the other blog:

I'm reading a lot of science auto/biography these days, and generally enjoying it a lot. While generally not much of a fan of the "great man" theory of science history, I also tend to like a really good story...

Full review here.

December 7, 2006

My Job in 10 Years: Collections: Further Thoughts on Abstracting & Indexing Databases

To recap:

My Job in 10 Years:



(PDF version for printing here.)

The last one of these posts, from way back in the fall of 2005, sparked a bit of response, in the comments and in an email from Roddy Macleod of EEVL, taking issue with my implication that traditional fee-based bibliographic databases are going the way of the dodo. The main bone of contention seems to be the value of the subject-based indexing and thesauri provided in bibliographic databases versus the lack thereof in that most popular of free web search engines: Google.

Before I go too much further, I think I should clarify exactly what I'm talking about when I say Google / Google Scholar. I tend to use the lower case version, google, almost as one would say kleenex for tissue. I mean specialized and general freely available search engines that can be used for scholarly research. So: Google Classic, Google Scholar, the new kid on the block, Windows Live Academic, and all the others that exist now and will exist in the next 10 years. Let's just call it googlesoft.

One of the interesting things about speculating about the future is that everyone has a different take on what's in store; of course, this is half the fun: if we're all thinking about the future at the same time, we get to bounce ideas off each other and hopefully change and grow our own conceptions. There are also, I guess, two ways to speculate about the future: first, how we think things are going to turn out and second, how we would like things to turn out. Dystopian versus utopian, in a way. I guess many have viewed my speculations as quite dystopian and I can live with that.

Certainly, I view our roles as professional librarians as evolving quite a bit over time: in particular, from being a group that had a definite service to offer that the customers really had no choice but to use (the situation up until the early 1990's in many ways) to being a group that has to make a case for its usefulness to its potential customers (ie. the net generation, millennials, whatever you want to call them). This group of potential customers has lots of choices about how they are going to do the research they need, both for their courses and, for faculty members and grad students, the work that makes up their theses and research. And we must not forget that today's connected millennials are going to be the new faculty members in that magic 10 years. Certainly the habits and expectations they have now will manifest themselves in their new, adult roles. If anything, they will be intensified. Current faculty members are certainly attached to the system of journals and conferences, to publishing monographs, to the apparatus of scholarly publishing that we know. But will they retain this attachment, and will their new colleagues have that attachment at all? I suspect the answer is no, they will not retain the same level of attachment to the journal/conference/monograph culture that we have grown used to. Or at least not in the same way as in the past, and particularly not the STEM crowd.

So, what does all this have to do with googlesoft?

It has to do with expectations of simplicity, it has to do with the desire to find rather than search, it has to do with convenience and most of all, it has to do with “good enough.”

Librarians place a high value on subject classifications, controlled vocabularies and all those parenthood issues. And they're incredibly powerful tools to make our databases easier and more useful. But, when you get right down to it, formal human-generated subject classifications aren't the only strategies for deciding what a particular document is about. There are informal human-generated classifications that can be useful (ie. folksonomies) as well as automated text mining subject classification methods. That is certainly an active area of research and development and it's not hard to imagine that a lot of progress is going to be made in the next decade. And certainly, these automated and informal methods are going to be an awful lot cheaper than formal human ones. And that's going to be important, because it's also very important that googlesoft remain free to users. And text mining isn't the only way to decide what a document is about. There is also relevance ranking via links, a popular method that googlesoft et al already use to find the most relevant documents in a search. Will formal, human-created article metadata disappear completely in just 10 years? I doubt it, but we'll definitely start to see the shift in that time frame.

So, the first reason I think subscription A&I databases are in trouble is that I believe ultimately a "good enough" system of automated subject classification will be devised that will work in tandem with user-generated tagging and keyword assignment and, where necessary, human-generated formal classification. (Remember, I'm talking A&I services here, not book cataloguing, which I don't think will be affected in the same way.)
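
To make the "good enough" idea concrete, here's a deliberately naive sketch of automated subject assignment: count a document's terms, throw away stopwords, and call the most frequent ones its subjects. Real text mining uses far more sophisticated methods (TF-IDF weighting, trained classifiers, link analysis), but even something this crude hints at why cheap automated tagging can start to compete with expensive human indexing:

    import re
    from collections import Counter

    # A tiny stopword list; a real system would use a much larger one.
    STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "that",
                 "for", "on", "how", "from", "these", "s"}

    def naive_subjects(text, n=3):
        """Return the n most frequent non-stopword terms as crude subject tags."""
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if w not in STOPWORDS)
        return [term for term, _ in counts.most_common(n)]

    abstract = ("The laws of genetics describe how traits pass from parents "
                "to offspring; Mendel's genetics experiments used pea plants "
                "to uncover these laws of inheritance.")
    print(naive_subjects(abstract))  # prints something like ['laws', 'genetics', 'describe']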

The second reason is that I think our users like using the free ones, and that they will continue to like the ones they've grown up with and used in elementary school and high school. They're quick and easy, and they mostly return fairly relevant hits for most clearly defined topics. It's just easier to find "good enough" and that is not necessarily a bad thing. And this is a trend that will only get more pronounced over time as expectations shift toward quicker, easier, more integrated, more connected, more open. Our patrons will increasingly get addicted to those things long before we see them; it'll happen in high school. It'll be a huge challenge for the subscription database vendors to compete with googlesoft in the coolness, openness and ease of use categories. And it will be our job to make sure our students understand how to use these search engines effectively, just as it has been our job to make sure they use current products effectively.

Think for a minute. Compare the revenue and market capitalization of Google and Microsoft with those of Elsevier and Thomson (take a look here). Who has the resources to radically improve their products, to acquire metadata, to market and promote, to win this particular battle of free vs. fee?

Where are the publishers in all this? They want the best and widest distribution of the metadata for their publications. Whether OA or subscription-based, eyeballs looking at documents, creating impact, that is what is going to drive their business model. That is how they will justify themselves to their funders, be they governments, libraries, authors, whatever. The publishers are probably even now starting to realize that it really doesn’t matter if someone finds your document through INSPEC or Google Scholar, as long as they find it and recognize the value you as a publisher provide. Certainly, there have been studies that show that open access documents have a greater impact than non-OA; it would seem to follow that more widely available and searchable metadata would also have a greater impact for the author and publisher. Subscription A&I databases are potentially in trouble because content publishers will gladly distribute their metadata to anyone and everyone who wants it because it is impact that drives their business model.

Another thing that we must remember: as librarians, our loyalty is absolutely to our patrons, not the A&I or content publishers. Obviously, we want those organizations to do well enough to continue to be able to provide their products to us, but really that is our only interest in their survival as organizations. We value them for what they provide for our users (of course, it's quite complicated here, as I certainly value and appreciate scholarly societies very differently than commercial publishers). Over the decades, the organizations that have helped us to provide products and services to our patrons have evolved and changed as we and our users continue to evolve and change. If today we spend a fraction on serials binding compared to 10 or 20 years ago, well, we make our decisions based on our needs and the needs of our users, not the needs of our vendors. As librarians interested in free and open access to scholarly output, we enthusiastically support the Open Access movement. Good quality free discovery tools are just as much a part of the goal of providing access to that output as good quality free journals. Just as I mostly don't care what the business model is of companies that provide OA journals (ie. scholarly society, commercial publisher, dedicated OA publisher, somebody in their basement), I also mostly don't care what the business model is of companies that provide freely available search engines. Can an A&I company add enough value to the metadata to make it worth paying for, no matter what? Sure -- look at SciFinder as a perfect example. Subscription A&I databases are potentially in trouble because librarians' loyalty to them is contingent on the value they add to the information discovery process.

So, to sum up, my real goal is to serve my user community as best I can and if in the longer term I see an opportunity to maximize my expenditures on content or infrastructure by minimizing my expenditures on discovery tools, I will seize it. What’s the time frame for me to make this kind of shift? I think that in the next decade we will certainly start to see expenditures on A&I databases diminish as free alternatives get better and, more importantly, are perceived (by our users and, ultimately, by us too) as equivalent to the more expensive alternatives. The A&I databases that survive this shake-out will be the ones that find ways to very significantly add value to raw metadata.

As usual, I realize prognostication is a risky business at best and I may be proven completely wrong on all of this (maybe even tomorrow!), so all disagreement, debate, comments and feedback are appreciated, as a comment here or email to jdupuis at yorku dot ca.

Next up: Instruction. (Hopefully much sooner than 14 months.)

December 6, 2006

Google Librarian Newsletter on Google Scholar

The latest Google Librarian Newsletter has a couple of articles on Google Scholar. One of them is a series of profiles on the various people on the GS team. Another is a video of the overview presentation they did at ALA.

Most interesting, however, is an interview with Anurag Acharya, Google Scholar's founding engineer.

Some fun bits:

TH: What is your vision for Google Scholar?

AA: I have a simple goal -- or, rather, a simple-to-state goal. I would like Google Scholar to be a place that you can go to find all scholarly literature -- across all areas, all languages, all the way back in time. Of course, this is easy to say and not quite as easy to achieve. I believe it is crucial for researchers everywhere to be able to find research done anywhere. As Vannevar Bush said in his prescient essay "As We May Think" (The Atlantic Monthly, July 1945), "Mendel's concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential."
Yes, they do want to take over the world -- A&I databases, look out. (More on that in the next day or two, he foreshadows.)
TH: Why don't you provide a list of journals and/or publishers included in Google Scholar? Without such information, it's hard for librarians to provide guidance to users about how or when to use Google Scholar.

AA: Since we automatically extract citations from articles, we cover a wide range of journals and publishers, including even articles that are not yet online. While this approach allows us to include popular articles from all sources, it makes it difficult to create a succinct description of coverage. For example, while we include Einstein's articles from 1905 (the “miracle year” in which he published seminal articles on special relativity, matter and energy equivalence, Brownian motion and the photoelectric effect), we don't yet include all articles published in that year.

That said, I’m not quite sure that a coverage description, if available, would help provide guidance about how or when to use Google Scholar. In general, this is hard to do when considering large search indices with broad coverage. For example, the notes and comparisons I have seen about other large scholarly search indices (for which detailed coverage information is already available) provide little guidance about when to use each of them, and instead recommend searching all of them.
He dissembles a little here. We know that they index publisher-provided metadata. Just tell us what that is. I can understand that it's hard to figure out what's what in the stuff they crawl on the free web, but they should know what deals they've made with publishers -- that's what we want to know. Who's in and who's out. I suspect that they don't want us to know that there are some pretty significant publishers that aren't covered.
TH: Some librarians consider Google Scholar's interface too limited for sophisticated researchers. Do you plan to provide more options for manipulating or narrowing search results?

AA: Our experience as well as user feedback indicates that Google Scholar is widely used by researchers of all levels of sophistication -- from laypersons to leading experts. This is not surprising. LibQual's study of the search habits of undergrads, graduate students and faculty members (presentation available here) shows that all three groups prefer general search engines with broad coverage and do so with roughly the same frequency.

Regarding options for narrowing and manipulating results, we do provide some on the advanced search page. However, we have found that other than time-based restrictions (to search papers from the last few years), none of these options see much use. More generally, we refine the user interface for Google Scholar based on how people actually use it. Instead of considering a laundry-list of features we may add, we consider a list of frequently-performed operations and see how well we support them. A long list of unrelated features wouldn’t be of much use. This is not surprising. For example, few of the tools in a full-featured Swiss Army knife see much use over its entire lifetime.
In other words, "good enough" is good enough for the vast majority of real researchers doing their day-to-day work, with only librarians doing the complaining. I'm actually pretty sympathetic to this point of view, with more to come on that front. (Once again, foreshadowing...)

December 5, 2006

ACM SIGMIS Database on Achieving diversity in the IT workforce

Lots of interesting stuff here -- ACM SIGMIS Database, v37i4, Fall 2006:

It's great that a good number of these papers are available with no subscription access barriers.

December 4, 2006

Blogorama

It's been a while since I've done one of these posts, and it seems that I've added a whole ton of new feeds. The interesting thing is that I've added a few library blogs, which I haven't done in a while. As usual, any suggestions for good library or scitech blogs are appreciated.


  • Reviews.com News is a new blog for the ACM Computing Reviews service. It looks like it'll be pretty interesting, especially since they'll be highlighting new Hot Topics essays like this one on Game Theory & Electronic Markets. Organizational blogs like this one seem to go one of two ways -- either they catch on and become good sources or they lose interest internally and get less interesting over time. Here's hoping that the Computing Reviews folk fall firmly in the former category.
  • Search Engine Land -- lots of posts about the search engine industry. It'll be interesting to see in a few months whether I end up following this blog, Search Engine Watch's blog, or both.
  • Brian Gray's newish blog, Are you 2.0 yet? is an interesting look at L2 from a scitech library perspective.
  • Library 2.0: An Academic's Perspective by Laura Cohen offers occasional thoughtful & provocative essays on various issues where academic librarianship & L2 collide. Kind of like a review journal, in a way.
  • LibraryZen blog -- the blog that complements the wonderful new search engine, LISZEN.
  • Time's Eye on Science -- I'm always willing to try new general-interest science blogs, and this looks like a good one. These do tend to have a short life expectancy in my feed reader, though, so it remains to be seen if it'll keep my interest long term.
  • Center for Science Writings Blog is by John Horgan's institution, so it should be good. Not too frequent posting so far.
  • Galactic Interactions -- Rob Knop's blog, mixing science, science studies, politics and other stuff.
  • Adventures in Applied Math is a good new-to-me CS blog.
  • Science Books Blog. Just what it sounds like. I'm hoping that this stays a long term, vital source of commentary and recommendations.
  • Knowing and Doing -- A blog by computer scientist Eugene Wallingford. A good blog that gives real insight into the life of an academic cs prof.
  • Egg -- Like a Bird's Egg is, as I've mentioned before, Chris Leonard's new blog. Chris is a former editor at Elsevier, now at Phys Math Central.
  • Big Monkey, Helpy Chalk and Science Musings Blog are two philosophy of science blogs. I always like to add a couple of those, as they keep us all honest.

December 3, 2006

LabLit Survey: Best way to inspire young scientists?

Run on over to LabLit to give your answer to the survey:


  • Kid's TV programs with lots of explosions
  • Not being afraid to teach the subjects in depth
  • Intriguing scientist role models in films, books and on TV
  • Shift culture so that science is celebrated, not feared
  • Featuring scientists in celeb mags like Hello

I chose "Intriguing scientist role models in films, books and on TV" because that's where I think kids build an identification with various roles -- be it police officer, doctor, lawyer, whatever. I'm sure if someone took the trouble to invesitgate the issue that the professions that are best represented in the media are also the most popular among students and with the most diversity among those students. Kids see the stars in the media and the natural inclination is to say, "Hey, I could do that!"

December 1, 2006

Search engines & writing for scitech students

Two quickies for a Friday afternoon: