Synaptica to be a sponsor of Taxonomy Boot Camp 2009

Synaptica LLC is proud to announce that it will be a Platinum sponsor at this year's Taxonomy Boot Camp 2009 in San Jose, California. Synaptica's new CEO, Dave Clarke, and Paula McCoy of ProQuest will also give a joint presentation at 8:30 AM on the morning of Friday, November 20th. The presentation will compare and contrast auto-classification with manual content indexing and classification, and show how Synaptica can be a valuable part of both approaches. If you haven't already, sign up today and join us in San Jose. And if you are coming, look for our table outside the presentation area, where we would love to fill you in on the recent changes at Synaptica and new developments in the software and the company alike. We look forward to seeing you there!



Patrick Lambe's Survey on the Future of Taxonomy Work

Patrick Lambe has been analyzing the knowledge, skills and experience needs of the taxonomy profession for a while, and as part of this work he is conducting a survey on the present and future of taxonomy work and the needs of taxonomy professionals. Patrick is the author of a great taxonomy development book titled "Organising Knowledge: Taxonomies, Knowledge and Organizational Effectiveness" and an active writer on taxonomists and taxonomy development on his Green Chameleon blog.

Patrick, like some members of our own Dow Jones taxonomy team, will also be at Taxonomy Boot Camp in San Jose this November. According to his original request for survey responses, participants in the survey will also receive a report of the results (which will include additional research beyond the survey). Patrick writes: "For those of you who believe that taxonomies still have a future, this might make interesting reading, and for those of you who believe a la Theresa Regli that “taxonomies are dead”, we’d like to hear from you on why!"

You can take the survey at


Need to Create Good Work Fast? Simple - Get a New Computer

I have a problem. I have six pieces of work to write in a couple of weeks and I'm under pressure. I need the work to be spot on, of the highest quality and created in the shortest space of time.

The answer to my problem? Buy a new computer.

Does this sound strange to you? Can you see how improved output comes from a new computer?

I was sceptical, but the Sales guy said a new computer was the answer. I asked him to explain, and he told me how the time I was wasting messing with my old computer was at the heart of my problem. All those lost minutes fixing crashes, worrying about blue screens, battling with slow performance, scanning for adware, spyware and worse. 'Forget all that' was the message I was getting: move to the promised land of a newer, faster computer and your problems are solved. After a bit more chat I was sold. My new computer would save me time, and that extra time would be devoted to my key tasks, which in turn would lead to better quality work, and faster work at that. The time saved was even money in the bank to set against the cost of the computer - so it wasn't even as expensive as I'd thought.

At this point I excused myself, had a coffee, and thought it through one more time. Did it make sense that a new computer was my solution? The light quickly dawned: of course it didn't. A new computer wasn't the solution, and time saving was not my key issue. How did the Sales guy know that time saved would be time I'd actually spend on my document tasks? How did he know the processes and tasks I'd been performing with my current computer were not valuable experiences, not to be lightly discarded? Why did he make no attempt to understand me and my circumstances, and simply sell me the one-size-fits-all Sales line that so many people still hear today?

I soon realised that I'm better off assessing my goals and objectives. What is it I need to do? For whom? Why? And when? Then I need to ensure I'm prepared and enabled to achieve them. Is my broadband connection operating? Is it fast enough? Is the right software up and running? Can I access the libraries I need?

I would also benefit from improving my time planning and management skills. I need to focus on my key tasks. What is it I need to do? What problems am I having here? I also should not forget my deliverables. What do I need to produce and how do I get there?

All these areas, when addressed in the right way, will enable my tasks and improve my outcomes. Granted, this is a little harder to sell than a new computer equals better work and a wonderful life, but surely I'm worth that extra effort and it's certainly what I need to hear.

Many of us encounter this scenario frequently. How many times have you watched a Sales presentation built around saving time? Usually a calculator is involved and sometimes members of the audience are asked to volunteer key pieces of information - "How much time do you spend searching for information in a day?", "What's your hourly rate?", "How hard do you find tracking down the information you need?" "Could you be more productive if you saved some of this time?" Very often 'time saved' is then calculated and that 'time saved' directly equated to business advantage. Very often there is little or no thought put into the needs or objectives of individual businesses or any injection of common sense into the Sales pitch.
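The pitch arithmetic usually runs something like this. A minimal sketch with invented figures, showing how 'time saved' gets converted straight into money with no regard for what the freed-up time is actually spent on:

```python
# The classic 'time saved = money saved' sales calculation.
# All figures here are made up for illustration.

def naive_time_saved_value(hours_searching_per_day, fraction_saved,
                           hourly_rate, workdays_per_year):
    """Convert 'time saved' directly into an annual dollar figure -
    exactly the leap the sales pitch makes."""
    hours_saved_per_day = hours_searching_per_day * fraction_saved
    return hours_saved_per_day * hourly_rate * workdays_per_year

# "How much time do you spend searching?" -> 2 hours a day
# "What's your hourly rate?" -> $50
# Assume the tool halves search time, over 230 workdays a year:
value = naive_time_saved_value(2.0, 0.5, 50.0, 230)
print(f"Claimed annual benefit: ${value:,.0f}")  # Claimed annual benefit: $11,500
```

The fallacy, of course, is the final step: nothing guarantees that a saved minute becomes a productive minute.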

A Dow Jones information assessment looks for the real issues and pain points our clients experience, and works with them to solve their problems and enable improved outcomes. If you have an information management issue you need assistance with, speak to us and let us work with you to get to the heart of your needs. You never know - you might even save enough money to afford that new computer you've always wanted!

Passionate Geographers

I recently noticed a very interesting initiative - Project Geograph: Photograph Every Grid Square.

This project is working towards collecting and making available images depicting the geography of every square kilometre of the British Isles. This ambitious project seems to be progressing very well, with many good quality images loaded to the website.

Already over 8,900 contributors have submitted nearly 1,500,000 images, with an average of 5 images associated with each geographic square across England, Wales, Scotland and Ireland.  This is a great resource, preserving in amazing detail what the British Isles looked like at the start of the 21st century.  It is also a wonderful way to learn about the geography of these amazing islands and to dig deeply into their hills, valleys, towns and villages, and a superb source for genealogists wanting to see how a particular part of the British Isles looks today.

Back in 2007 I attended the Blogs and Social Media Conference 2.0 in London.  One presentation that has stayed in my mind since then was Lee Bryant's "Engaging with Passionates". In his exceptional presentation Lee described a ground-breaking social networking case study and talked about the energy that can be released when organisations successfully tap into a group of people who are truly passionate about a given topic.

I think you'd be hard pressed to find a better example of the power of passionates than the Geograph Project.  Looking at the number of contributors, the amount of the British Isles covered, and the quality of the photography and metadata created, makes a clear point - find people who are passionate about a topic, people who are committed to a hobby or interest, engage them in the right way and they will deliver time and again.

I wish everyone associated with the Geograph Project all the luck in the world, may they stay passionate and committed to what they do, and may their project benefit from their commitment.

Oh, and if you like what you see, submit a photograph, or start a similar initiative.



Report from Digital Asset Management (DAM) Conference - London, 1 July

I spent Wednesday 1st July at the Henry Stewart DAM Conference in London.

In my slot I talked about, "Tagging Images for Findability - Making Your DAM System Work for You."  I used my 30 minutes to raise the issue of organising images using metadata and controlled vocabulary to connect the images to the people who want to use them.  I spent a little time looking at the ways to use text to categorise images and the advantages and disadvantages that brings.  I devoted a lot of the presentation to raising issues to watch out for when tagging images, in particular specificity and focus in image depictions, abstract concepts and image 'aboutness' and the deceptive simplicity of visually simple images.

A far braver presentation than mine was given by Madi Solomon. Madi ditched the PowerPoint presentation to facilitate a refreshing debate on metadata.  Questions from the floor came thick and fast.  Madi did a great job of presenting 'on the edge' and drew out the experiences of many of the attendees and the challenges they were facing.

Also of note at the conference was a very informative presentation from Theresa Regli on 'Evaluating and Selecting Technologies' and a stimulating piece from Mark Davey on the old chestnut of ROI and Digital Asset Management Systems.  Mark took a pretty dry subject and a slot directly after a good lunch and succeeded brilliantly in making it entertaining, informative and practical. Take a look at his excellent presentation Digital Asset Management ROI - the basics. I think this is a key resource for anyone interested in return on investment in the DAM space and it's fun to watch too.

I had a great day at DAM London and I hope my fellow delegates found the presentations as helpful and enlightening as I did.




Report from the ISKO Content Architecture Conference - 22-23 June, London, UK

I spent Monday and Tuesday of this week at the fascinating ISKO Content Architecture Conference.

On Monday I gave a presentation on, "Still Digital Images - the hardest things to classify and find."
My presentation looked at the image market and the ways in which images can be annotated - or is that processed, classified, categorized, tagged, keyworded… We need a controlled vocabulary to control the vocabulary of controlled vocabulary!

SLA Tech Zone: Taxonomy and SharePoint -- A Powerful Combination

If you are planning to attend the upcoming SLA Annual Conference in Washington, DC, then you won't want to miss the SLA Tech Zone workshop Taxonomy and SharePoint--A Powerful Combination.


SharePoint helps your organization connect people to business-critical information and expertise in order to increase productivity and reduce information overload. It achieves this by providing your employees with the ability to find relevant content in a wide range of repositories and formats. Understanding and using taxonomies within a SharePoint implementation to help users find content is an essential part of ensuring a successful SharePoint deployment. Taxonomies can range from quite simple to very complex. In this session, we will cover the basics of evaluating what you can do to create a simple taxonomy that will yield the most benefits for your SharePoint implementations. You will have a chance to learn a range of best practices, from the basics of building a taxonomy to the hands-on skills of deploying that taxonomy within a SharePoint site.


This workshop is suitable as either a quick start or a refresher in taxonomy management for SharePoint. There are three sessions:

  • Monday, 15 June 2009 9:00AM - 10:30AM (Ticketed Event #640)
  • Monday, 15 June 2009 3:30PM - 5:00PM (Ticketed Event #660)
  • Tuesday, 16 June 2009 11:30AM - 1:00PM (Ticketed Event #805)

Price: US $35 member / US $35 non-member / US $35 student member


For details and registration information, see the SLA 2009 site.

Classifying Images Part 3: Depicted Content

Welcome back to my occasional image classification series.

The last time I raised the topic of image classification I discussed the basic attributes of images. This time I want to focus on the thornier issue of the content, or concepts, depicted in them.

There is a danger of treating an image like a piece of text and classifying its attributes: Who created it? When? What techniques were used? Then writing a title or caption and leaving it at that. Sometimes little more need be done to a document than record this kind of information, especially with free text searching, but lots more needs to be done to most images.

Image findability

Image findability is the process of using search and browse to access the images required. A major aspect of image findability relates to the things depicted in images. Image users often search for images based on the generic things in them, and also on the proper names of those things. Classifying images based on depicted content means considering anything and everything that is, or can be, depicted in an image. When considering this I like to focus my efforts on understanding the images I'm dealing with, the users who are trying to find and work with the images, and the ways in which these people need to search and browse for the images they need. After an assessment of these areas I then tailor my approach.

Broadly speaking people searching for depicted content are looking for a number of types:

  • Places: cities, towns, villages, streets...
  • Built works: parks, skyscrapers, cottages, walls, doors, windows...
  • Topography: mountains, valleys...
  • Groups and organisations: air forces, choirs, police departments...
  • People: roles, occupations, ethnicity and nationality: mothers, doctors, Caucasians, French, Germans...
  • Actions, activities and events: running, writing, laughing, smiling, birthdays, parties, book signings, meetings...
  • Objects: a myriad of items...
  • Animals and plants: common and scientific names...
  • Anatomy and attributes of people, animals and plants: arms, legs, adults, leaves, trunks, paws, tails...
  • Depicted text - often signs or writing shown in images...

Many of these generic types can also have proper named instances:

  • Proper names of people, places, buildings, topography, organisations, animals etc
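One simple way to capture both the generic type and any proper-named instance is to record them together on each tag. This is only an illustrative sketch - the `ImageTag` structure and its field names are my own invention, not the schema of any particular DAM system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageTag:
    """A single piece of depicted content: a generic term from the
    controlled vocabulary, plus an optional proper-named instance."""
    generic_term: str                   # e.g. 'Skyscrapers', 'Dogs', 'Mountains'
    proper_name: Optional[str] = None   # e.g. 'Empire State Building'

# Tags for a hypothetical image of the Empire State Building during a party:
tags = [
    ImageTag('Skyscrapers', 'Empire State Building'),
    ImageTag('Cities', 'New York'),
    ImageTag('Birthdays'),              # generic content with no proper name
]

def matches(tag, query):
    """Searching by generic type or by proper name works from the same record."""
    return query in (tag.generic_term, tag.proper_name)

print(any(matches(t, 'Skyscrapers') for t in tags))            # True
print(any(matches(t, 'Empire State Building') for t in tags))  # True
```

Keeping the generic term and the proper name linked on one tag means a search for either will retrieve the image.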

When dealing with depicted content I've found some of the biggest issues to be:

  • Identification - knowing what is in an image
  • Focus and specificity - knowing what to include and what to exclude
  • Consistency - applying the same term in the same way for the same depicted content

Identification - knowing what is in an image

Depicted content is a relatively black-and-white area - a dog is depicted, so a dog is tagged. It might sound a little weird, but working out what is actually in an image can be a lot harder than you think.

Take a look at the image "Do You Know What This Is?" by Sister72

The depicted content is fairly simple to see, but understanding what you're looking at is not that easy. Even if you know roughly what you're looking at, do you know what it's actually called?

One tip is to group similar images together when you're classifying them. Also, always start by assembling as much information as possible before you begin to classify images. It is especially important to gather together the information you have from the creator or custodians of the images.

Also important, when you have the luxury, is to get the image creator to add key metadata about the image at the point of creation, or soon after.

Focus and specificity

Knowing what to include and what to exclude, what to mention and what to ignore, is also much harder than it sounds.

Firstly, some image users will want a piece of depicted content tagged whenever it appears in an image, others will only want it tagged when the image shows a very good representation of that content, and of course many people will want something in between the two extremes.

Different users have different requirements. You need to understand the domain in which you're working and see the classification of depicted image content as supporting the needs of your users.
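One way to make those differing requirements explicit is to encode the tagging policy as a rule rather than leaving it to individual judgement. The sketch below is purely hypothetical - the prominence score and the threshold values are invented to illustrate the idea of tuning 'when does content count' for each user community:

```python
# A hypothetical policy: tag depicted content only when it is prominent
# enough for this user community. 'prominence' might be the fraction of
# the frame the item occupies, or an indexer's 0-1 judgement.

POLICIES = {
    'tag_everything': 0.0,    # every visible item gets a term
    'balanced':       0.25,   # item must be reasonably prominent
    'main_subject':   0.6,    # only the clear focus of the image
}

def should_tag(prominence, policy):
    """Decide whether a depicted item warrants a term under a policy."""
    return prominence >= POLICIES[policy]

# A goat filling 70% of the frame vs. a distant mountain at 10%:
print(should_tag(0.7, 'balanced'))        # True
print(should_tag(0.1, 'balanced'))        # False
print(should_tag(0.1, 'tag_everything'))  # True
```

The point is not the particular numbers but that the threshold is agreed, written down, and applied consistently across the team.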

For example, would you tag everything in this 'Messy Room' image?

What would you miss out and why?

Looking at the image of "Mountain Goats", from Thorne Enterprises

Would you tag this with goats as well as mountains? Would this be helpful?

Let's look at four images depicting windows:

'Window to the World'?,

'Portuguese Window'?,

'What Light Through Yonder Window Breaks'?



Looking at these, it soon becomes clear that even deciding to apply a simple term like 'Windows' is not always easy.

Would you apply 'Windows' to the image of the cat looking out of the window? Is a window actually depicted in that image? If the image wasn't tagged with 'Windows' how else would anyone find an image of a cat looking out of a window?

The other three images show windows as parts of buildings, but is a building always depicted? Deciding when to apply a building type or the name of a building can be hard. Should you do this every time a part of a building is shown? Only when the whole building is shown? When enough of the building is visible? Or when a section of the building that to most people would represent the building is visible? For example, what part of the Empire State Building would you consider to depict that building? Rarely does anyone see it all - how much is enough? Would you treat the images of windows in a similar way and classify them all with a building type of 'Houses', or would you ignore the structure and focus on the parts - the window, the roof?


Consistency

Achieving consistent application of terms to images revolves partly around clear term definitions, well-defined application rules and guidelines, and a robust quality assurance process.

Term definitions are very important. Defining the meaning of a term, and ensuring the people choosing which term to assign understand that meaning, can be crucial to term application. For example, creating a term such as 'Bow' - the weapon, the knot, the gesture, or the front of a ship? - without defining its meaning is not going to make it easy to apply.

Application rules that are well considered, thorough and clear are also very useful. Even a simple concept often needs some form of guidance linked to it. I remember a while ago needing two terms, 'Indoors' and 'Outdoors', to allow users to find images of people who were outside and inside - a simple concept you might think, one that people often need, and one that's easy to apply - who'd need guidelines for that? However, it soon became clear that guidelines were needed after I received a series of interesting questions: Is being on a train indoors? Should studio shots always be considered indoors? Does every shot of a person have to have Indoors or Outdoors assigned to it? If not, when should a term be used and when not? Is this a focus issue? If so, how much of a location needs to be seen before Indoors or Outdoors is used? A clear set of application guidelines followed an interesting meeting!
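In vocabulary terms, this guidance usually lives alongside the term itself as a definition and a scope note. A minimal sketch - the structure is mine, and the example notes are invented illustrations of what such guidelines might say, not real editorial policy:

```python
from dataclasses import dataclass

@dataclass
class VocabTerm:
    """A controlled vocabulary term with its meaning and usage guidance."""
    term: str
    definition: str        # what the term means
    scope_note: str = ''   # when (and when not) to apply it

indoors = VocabTerm(
    term='Indoors',
    definition='Scenes located inside a building or enclosed structure.',
    scope_note=('Hypothetical guidance: apply to studio shots; do not apply '
                'to vehicle interiors such as trains; apply only when enough '
                'of the location is visible to establish the setting.'),
)

# Without a definition, an ambiguous term like 'Bow' is unusable:
bow = VocabTerm(
    term='Bow',
    definition='The weapon used to shoot arrows; not knots, ships or gestures.',
)
print(indoors.scope_note)
```

Indexers then consult the scope note at the moment of tagging, instead of each inventing their own interpretation.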

Strong quality assurance processes are very valuable. People make mistakes and images generate interesting issues. Appointing staff to review a percentage of classification work based on clear guidelines, and then sharing findings with the people who assigned the terms to the images, is an important way of assessing how well the image classification is progressing and keeping a classification team synchronised.
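The review step itself can be as simple as pulling a random percentage of recently classified images for a second pair of eyes. A sketch of that sampling, with invented identifiers and batch size:

```python
import random

def sample_for_review(classified_image_ids, percentage, seed=None):
    """Select a random percentage of classified images for QA review."""
    rng = random.Random(seed)  # seedable for repeatable audits
    k = max(1, round(len(classified_image_ids) * percentage / 100))
    return rng.sample(classified_image_ids, k)

# Review 10% of a hypothetical day's work of 200 images:
batch = [f'img-{i:04d}' for i in range(1, 201)]
to_review = sample_for_review(batch, 10, seed=42)
print(len(to_review))  # 20
```

The reviewer's findings on those 20 images then feed back to the whole team, which is what keeps the classification synchronised.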

Today I’ve talked a lot about content depicted in images; next time I’ll focus on abstract concepts, which relate to an image’s ‘aboutness’.

Content Based Image Retrieval - Google and Similar Image Search

I was very interested to see Google experimenting with visual similarity in still images, what I usually call Content Based Image Retrieval or CBIR.

Google Labs have just launched an image search function based on visual similarity - Google Similar Images. This new offering allows searchers to start with an initial image and then find other images that look like their example picture.

I've been reviewing these types of systems on and off since the early '90s. They've always offered much, but I never saw any evidence that the delivery matched the hype.

I've always found that using pictures instead of text to find images works best on simple 2D images: carpet patterns, trademarks, simple shapes, colours and textures. Finding objects in images was always a struggle, and looking for abstract concepts - fear, excitement, gloom, isolation, solitude... - was never more than a vague possibility. Over the years a lot of work has been done in this area, and the search results I've seen have started to improve, but this technology is still young and, in my personal opinion, still rarely delivers what most users want, need and expect.
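The colour-and-texture end of this technology is often little more than histogram comparison. Here is a toy sketch of the idea, run on hand-built 'images' (lists of RGB pixels) rather than real files, to show why colour-only similarity matches scenes rather than objects:

```python
from collections import Counter

def colour_histogram(pixels, bins=4):
    """Quantise RGB pixels into bins**3 colour buckets, normalised to sum to 1."""
    step = 256 // bins
    hist = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    return {bucket: n / total for bucket, n in hist.items()}

def similarity(h1, h2):
    """Histogram intersection: 1.0 means an identical colour distribution."""
    return sum(min(h1.get(b, 0), h2.get(b, 0)) for b in set(h1) | set(h2))

blue_sky   = [(40, 120, 220)] * 90 + [(240, 230, 180)] * 10  # mostly blue
blue_sea   = [(30, 110, 210)] * 80 + [(250, 240, 190)] * 20  # also mostly blue
green_wood = [(30, 140, 40)] * 100                           # green

print(similarity(colour_histogram(blue_sky), colour_histogram(blue_sea)))
print(similarity(colour_histogram(blue_sky), colour_histogram(green_wood)))
# The two blue scenes score high even though one is sky and one is sea -
# exactly the kind of confusion colour-only similarity produces.
```

Real CBIR systems add texture, shape and spatial features on top, but the same limitation shows through in the results discussed below.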

Looking at Google Similar Images, I wonder how much of the back-end is pure content based image retrieval (CBIR), how much is using metadata in some way, and how the two are interacting. One thing that appears to help produce a tight first page of results is simply pulling the same image from different sites. I also noticed that the 'similar images' option is not available for all images, which makes me wonder why. Have some images been processed in ways that others haven't?

Google Similar Images - 5

Diving right into the experience, I entered a query for a place in the UK and didn't see any image results with the 'Similar Images' option. I wonder whether this is to do with the presence of the results on UK websites?




I persevered, and found some interesting images and got some interesting results.

Google Similar Images 1 - beach

I started with a fairly standard image of a beach scene, always a favourite with testers. As you can see, I got a pretty good first screen back. However, the 5th and 6th images on the top row show no sea or beach, and neither do the first three images on the second row.


Google Image Search 2 - Pole

I moved on to an image of what looks like equipment at the top of a pole.

The results were much more mixed: studio shots of objects, fighting people, trucks etc. No images were returned that I would consider similar to the example picture.

Google Similar Images 3 - clock face

Interesting results came from a similarity query on a clock face.  A couple of the first results hit the mark, then the results set degenerated into image similarity based more on the colour and the black background than anything else.


Google Similar Images 4 - road

My last attempt, before morning coffee called, was an image of a country road. I was hoping that the clear roadway might produce a pretty precise results set. However, I was a little disappointed by what I saw.

The first results page only produced one vague road on the bottom row, with most of the similarity seemingly related to colours instead of objects.

From my less than scientific dip into this Google Labs offering, it looks like the highlighted images on the Google Similar Images home page produce good results - better results than I've seen other systems come up with. Many other image queries are sure to also produce results which may well impress. However, many of the results I saw did not match the initial level of accuracy I saw from the highlighted home page pictures.

I don't want to be picky - this is still a prototype, after all - and well done to Google for introducing a wider audience to this type of image search. Hopefully, after more work, the results will increasingly make sense to people, the access points offered to depicted content and conceptual aboutness will improve, and more images will be findable for more people.

Until that time, visual search without text will help with image findability, but text, metadata and controlled vocabulary applied to images by people are, for me, still king, and will continue to offer the widest and deepest access to images for a long time to come.




Taxonomy is key to Effective ECM

I recently attended a seminar, 10 Steps to Business Efficiency with Content, Collaboration and Process, given by the good people at AIIM, all about ECM strategies and best practices. This was a free seminar, well organized and well attended by a broad spectrum of representatives from all types of organizations, large and small, in new and old industries. The topics of discussion ranged from the most effective way to digitize archival assets, to applications that better allow for federated search across various data repositories; and there was certainly a lot of discussion around what has become the most ubiquitous of ECM applications, Microsoft SharePoint.

There were, of course, the usual quotes and statistics from AIIM, Forrester and Gartner regarding information proliferation and management today: the amount of data being produced is doubling every 18 months; 80% of this data is unstructured, and 90% of that is entirely unmanaged.

An interesting quote, which I will paraphrase here, was attributed to Thomas Washington: "The pursuit of knowledge in an age of information overload is less about the process of acquisition than it is about a proficiency of tossing things out." And regarding the storage of all of this information, another interesting fact was thrown out: while 1 GB of storage may now cost an average of 20 cents, it costs $3,500 to review that same 1 GB of data and start to make sense of it in the context of your business (AIIM).

As I listened to the various presentations and vendors I was struck by one thing: none seemed to offer a unified solution for using taxonomy more effectively to structure, classify and categorize the content that was going into these vast data repositories. Certainly it was agreed that there was value to such a process, but it is something that many organizations have still not recognized as absolutely necessary to fundamentally improve the tagging, organization and discovery of information within these huge libraries of data, documents, and other media.

It is our opinion that the integrated use of taxonomy applied to ECM applications, as well as across the rest of the enterprise, using a centralized and standardized set of vocabularies for navigation, search, discovery, meta-tagging and many other applications is a necessity in moving towards a unified means of data normalization and discoverability. To achieve this we offer services to get companies started as well as tools like Synaptica with out-of-the-box integrations to tools like SharePoint, but also more generic means of integrating with external applications via simple APIs and Web Services.

As the proliferation of data only increases over time and the means of digitizing archival records or utilizing native electronic formats becomes more efficient, storage becomes less a matter of cost and more a matter of management. The efficient means of identifying, tagging, categorizing and sorting information will be key to the effective operation of any organization.

A couple of months back, my colleague also wrote up the 10 Rules of Successful ECM Implementation after attending an AIIM seminar; we have found it quite useful in talking to business and technology owners about content access strategies.

We see many of our customers at the forefront of addressing these issues, and, working with them, we continue to provide better and easier ways for data managers and end users alike to find what they are looking for. We look forward to sharing some of these use cases, as well as hearing from you about your successes and struggles!

Image | Flickr | ul Marqa