Synaptica Announces SharePoint Integration

Our Synaptica product enhancement strategy is to continuously develop useful and innovative ways for our clients to use Synaptica for their taxonomy and metadata management needs. So it wasn't a surprise when some of our clients asked us to provide an 'out-of-the-box' integration point into SharePoint: we know firsthand about the issues with managing taxonomies in SharePoint, both from our own internal experience and from multiple client engagements over the last few years.

Microsoft SharePoint has over one hundred million licenses in place, and its adoption continues to grow globally. In 2007, an IDC survey of 300 companies found that 61% were deploying SharePoint enterprise-wide and that 28% of those using SharePoint in specific departments expected to expand usage to the enterprise within the next 12 months; a year later, things don't seem to be slowing down.

With that kind of adoption and penetration across so many industries, it is impossible to ignore the impact that SharePoint is having as a portal for information and document sharing both inside and outside the enterprise. As a result, Synaptica is proud to announce an integration with SharePoint that addresses some of the known pain points users face when trying to use taxonomies within SharePoint to tag, search, and discover documents and other content.

In this short video overview we take you through the core elements of our Synaptica: SharePoint Integration:

View Video directly

With this Synaptica integration you can:

1) Import a complete vocabulary into SharePoint as a list: This feature supports the import and update of a vocabulary (taxonomy, thesaurus, authority file, etc.) into SharePoint, creating a new list that can then be applied as a column linked to content within a document library. As the vocabulary is updated in Synaptica, the list stored in SharePoint can be refreshed to ensure that the most current information is being stored and applied as metadata to documents and content.

2) Provide dynamic access to Synaptica, allowing users to tag content: Using Web Services, this feature allows SharePoint to access a Synaptica system through either a keyword search or a navigable "tree browse", so users can locate specific terms and apply them as metadata. This dynamic access ensures that SharePoint users employ standardized terminology to tag content, while the same vocabularies can be used across the enterprise and in other applications.

3) Provide dynamic access to Synaptica for search and discovery: A SharePoint Web Part allows users to search or browse Synaptica vocabularies in real time, using the same terms that have been applied to tag the content. This feature can also "direct" users to the proper terminology, rather than leaving them to guess at how a piece of content might have been tagged using an uncontrolled, free-text method.
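
To make the keyword-search idea above concrete, here is a minimal sketch, in plain Python, of how free-text input might be resolved against a controlled vocabulary's preferred terms and synonyms. The vocabulary, function name, and terms are invented for illustration; this is not the actual Synaptica Web Services API.

```python
# An in-memory stand-in for a controlled vocabulary:
# preferred term -> list of synonyms (non-preferred terms).
VOCABULARY = {
    "Mergers & Acquisitions": ["M&A", "takeovers"],
    "Initial Public Offering": ["IPO", "stock market launch"],
    "Bankruptcy": ["insolvency", "Chapter 11"],
}

def keyword_search(query):
    """Return preferred terms whose label or synonyms match the query."""
    q = query.lower()
    matches = []
    for preferred, synonyms in VOCABULARY.items():
        if q in preferred.lower() or any(q in s.lower() for s in synonyms):
            matches.append(preferred)
    return matches

# A user typing a free-text keyword is directed to the standardized term:
print(keyword_search("ipo"))  # ['Initial Public Offering']
```

The point of the lookup is the one made above: whatever variant the user types, the term applied as metadata is the single standardized form.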

With this initial iteration of the Synaptica: SharePoint integration, we hope to solve some of the biggest problems we hear about with users trying to better organize, tag and discover content within a SharePoint portal. We will be looking at expanding the integration over time and adding improved features as we learn more about how we can assist our customers and SharePoint users with these integral tasks.

For more information about this new integration, and to see if you qualify for a free two-week trial of Synaptica with the SharePoint Integration, please contact daniela.barbosa@dowjones.com or use this Contact Us form to submit your details.

10 Rules of Successful ECM Implementation

Last week I attended AIIM’s ECM seminar on Automating Document-centric Processes – Is SharePoint Enough? It was a really interesting and informative event, with a few general sessions, several presentations of case studies, and product demonstrations from various vendors in the ECM realm.

AIIM President John Mancini closed the seminar with his 10 rules for successful ECM implementation:

  1. Build a strategy.
    When implementing an ECM solution, winging it is a bad idea. Especially if you are implementing a solution as viral as SharePoint, you should have a well-defined strategy. You should define business requirements, think about governance, analyze content systems, and identify points of integration. Formulating a strategy will save money and increase the likelihood of a successful project.
  2. Not all content is alike.
    You should think about the nature of the content you are trying to manage. Is it office-based content, transactional content, or persuasive/creative content? You need to pick a solution that matches your content.
  3. Prepare for eDiscovery.
    Sector-based regulations aren’t just a flash in the pan. Just because your business hasn’t had to deal with eDiscovery yet doesn’t mean you won’t have to in the future.
  4. Good enough is better than nothing.
    Doing something to get your content under control is better than doing nothing at all. You don’t have to start with the perfect solution.
  5. Ripping out and replacing is not usually a good starting point.
    This is especially true for more mature ECM organizations. If you have multiple repositories, you have to deal with them and think about policy structure around the information. Think about how you can provide access to information in those various repositories. Look for a vendor who will help with the integration challenge.
  6. Acknowledge the reality that this is a hybrid world.
    Paper is still part of the equation. Although we would like for everything to be digital, that is not the reality. Don’t get hung up on wanting everything to be digital—sometimes digitizing information can be too resource-intensive and unnecessary. Evaluate your strategy.
  7. Be militant about ROI and deployment times when thinking about projects.
  8. Consider alternate delivery models in your ECM approach.
    There will possibly be fewer IT people in the near future because of the economy. Consider hosted solutions as a way to lower risk for management.
  9. Spend some time on standardizing the front-end of your processes.
    Consider questions such as: are you digitizing things that should have been digital to begin with?
  10. Once you have something digital, keep it that way.
    Why have a digital process all the way until you have to sign a document? Rather than moving from digital to analog and back to digital, consider processes that will keep content digital.

I found this list to be very relevant to some of the work I've been doing lately. Often I talk to clients who are implementing an ECM solution but haven't yet formulated a clear strategy. Organizations usually have content stored in several repositories, and employees don't know how to access that information, assuming they even know it exists. That's why we suggest an assessment prior to implementing a new solution. An assessment can be conducted internally if the resources are available, or our Taxonomy Services team can perform one for you. An assessment will help you identify your various content repositories and develop a strategy to access that siloed information.

Synaptica Has Got Its Head in the Clouds

The way companies use software has been shifting, and if your head hasn't been in the clouds over the last few years, I am sure you have noticed the move to SaaS (Software as a Service) offerings and more services moving to the 'cloud'. From The Economist's recent 14-page special report on corporate IT, 'Let It Rise', focused on cloud computing, to Microsoft's recent Azure announcement signaling an even bigger investment in moving services to the cloud, to the discussions around Tim O'Reilly's post Web 2.0 and Cloud Computing, and of course the discussions about the economics of cloud computing in today's world, it is evident that these models, which are not really 'new', are here to stay.

It is a little-known fact, and one I am trying hard to make sure the marketplace knows: Synaptica is available as a hosted application, with access to nearly all of the features of the installed product (including robust Web Services). And in line with the recent buzz in the marketplace, access to Synaptica as a 'service' is something we have been getting more and more requests about.

Who is interested in a taxonomy and metadata management tool as a hosted model? Well, it is not for everyone who needs a tool like ours, but among those who are interested it really varies. For example:

  • Small to medium corporate libraries, or product manager/marketing groups, who manage various taxonomies and do not have a lot of IT resources to bring a tool in-house, but who can really benefit from a centralized taxonomy management tool that their global colleagues can access securely via the Internet to work on the vocabularies collaboratively
  • Companies that have an urgent need for a tool but lack the resources to bring it in-house quickly, and who choose a hosted model as a first phase to get their taxonomy development and deployment done
  • Companies with a technology architecture, such as one based on the LAMP stack, into which Synaptica cannot currently fit nicely
  • Start-ups building a consumer service that requires a tool to manage their controlled vocabularies (e.g. product categories, navigation taxonomy, etc.) but who do not have the IT infrastructure to host an application like Synaptica (e.g. most of their stack is already in the 'cloud')

So with our hosted model, we can provide an affordable and secure way to manage an important part of a company's business, whatever tier that company is at.

And the best part? Coming in at the low end, with a Synaptica hosted annual license (with full access to all editorial and administrative features, including Web Services), you can basically choose either to use one of the premier taxonomy management tools in the marketplace or, if you are so inclined, to spruce up your office by buying a Hyacinth Macaw parrot, buy one of your employees a nice baby shower gift like this blinged-out baby pram, or update your outdoor picnic patio with the Kalamazoo Bread Breaker Two dual-fuel grill. Yes, it really is your choice.

A Project Taxonomy Can Avoid Hours of Frustration

Here at the Synaptica Central blog, most of our posts focus on developing and managing complex taxonomies, which is what our taxonomy consultants are usually doing at client sites during the week... unless, of course, they are busy blogging here ;-).

There are certainly different levels of complexity depending on the client, but the business needs are typically robust enough that at some point the customer also looks for a tool to manage those vocabularies, and Synaptica fits the bill. Typically this is because of the need to maintain relationships between terms in a thesaurus, such as Broader Term (BT) and Narrower Term (NT), which are hard to manage in a spreadsheet, or relationships between different vocabularies, which many of the thesaurus management tools in the marketplace do not allow. Clients may also need to integrate these vocabularies into other systems such as search engines, CMS/DMS, and DAMs, rather than sending Excel sheets around the company, which can be quite painful.
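
As a rough illustration of why BT/NT relationships outgrow a spreadsheet, here is a hedged sketch of a thesaurus hierarchy held as a simple mapping, together with the traversal that a flat sheet cannot easily express. The terms and function names are invented for the example and are not part of Synaptica.

```python
# Broader Term -> its Narrower Terms (BT -> NT).
NARROWER = {
    "Beverages": ["Coffee", "Tea"],
    "Coffee": ["Espresso", "Kopi Luwak"],
}

def all_narrower(term):
    """Recursively collect every narrower term (NT) beneath a term."""
    result = []
    for nt in NARROWER.get(term, []):
        result.append(nt)
        result.extend(all_narrower(nt))
    return result

def broader(term):
    """Find the broader term (BT) of a term, if any."""
    for bt, nts in NARROWER.items():
        if term in nts:
            return bt
    return None

print(all_narrower("Beverages"))  # ['Coffee', 'Espresso', 'Kopi Luwak', 'Tea']
print(broader("Espresso"))        # 'Coffee'
```

Answering "what sits under this term?" or "what is this term's broader term?" requires exactly this kind of traversal, which is what a dedicated tool manages and a spreadsheet of rows does not.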

We have also seen some pretty cool uses of the tool, however, like Jim's recent post, Thinking Outside of the Synaptica Box, about our own in-house usage. Our clients see the power of the tool and adopt it for their own needs, often bringing users into the fold who never thought they would be creating and maintaining a "taxonomy"!

This post, Project Management from the Developer's Perspective: Project Taxonomy, by Stacey Mulcahy on the O'Reilly InsideRIA blog, reminds me of some of the unique ways customers are using the Synaptica tool.

In her post, Stacey does an awesome job of explaining how "Adopting a project taxonomy is one of the simplest pro-active ways to avoid hours of frustration caused by miscommunication. Once team members, regardless of discipline and role, utilize a shared vocabulary, interactions become more meaningful and ultimately more productive as more time is spent in communicating the message and less time clarifying its context."

Features like Synaptica's "MyWeb Views" allow admins to quickly create read-only views so the whole organization can be on the same page, for example with links to images like Stacey suggests in her post, so that everyone agrees on what a specific term means, whether for the organization as a whole or only for the specific project the team is working on.

It is a must-read post. If you are thinking about the different ways controlled vocabularies are being used in your enterprise, already have Synaptica in house, and just want others in your organization to benefit from the tool, look to your project managers and let them know you have a tool in house that can simplify the way they manage taxonomies with their project teams, avoiding hours of possible frustration.

Image|Flickr|RACINGMIX

E-commerce, Comercio Electrónico, Commerce en Ligne, Elektronischer Handel ...

According to a recent global survey conducted by The Nielsen Company about trends in online shopping, over 85 percent of the world’s online population has used the Internet to make a purchase.

Finding (or not finding) products and services on e-commerce sites is key to success, regardless of the language an online shop operates in. The conversion rate of a search, i.e. the rate at which searches actually lead to purchased products, is one of the central measures of how successful an e-commerce site is.

The end user expects an interface that is intuitive and easy to use, as well as navigation and search that direct him or her to relevant products and services. How the user's search terms are actually associated with the "right" search results is of no interest to the online shopper, but it is a complex issue that all e-commerce sites and online shops have to deal with.

Having worked with many e-commerce customers in Europe, I have come across many of the complexities involved in optimizing a site's search capabilities, of which the end user literally sees only the tip of the iceberg.

Among the content, controlled vocabularies, search metrics, and process questions that need to be addressed, having the right tools to optimize search is probably the simplest, but no less important, one.

Often, search engines focus on what they are made for: searching. Managing vocabularies for search improvement is usually not one of the areas that vendors specialize in or focus on. The most relevant features we encounter that are often not covered by search engines are:

  • Central management of vocabularies (products, services, colours, materials, and other filters) to ensure that there is one version in place from which extensions can be built if needed
  • Allowing different users to contribute to a controlled vocabulary through different levels of access rights, for example working directly with content editors to gather input
  • The ability to add comments to terms (why has x been introduced as a synonym of y?)
  • The ability to monitor the progress and changes that have been made
  • The ability to retrieve historical information
  • The ability to create audience-centric views
  • ...to name but a few!
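
A few of the bullets above (synonym comments, change monitoring, historical information) can be sketched as a tiny data model. This is an illustrative assumption of how such record-keeping might look, not Synaptica's actual schema; all field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Term:
    label: str
    synonyms: dict = field(default_factory=dict)   # synonym -> editorial comment
    history: list = field(default_factory=list)    # audit trail of changes

    def add_synonym(self, synonym, comment, editor):
        """Record the synonym, why it was added, and who added it."""
        self.synonyms[synonym] = comment
        self.history.append((editor, f"added synonym '{synonym}': {comment}"))

term = Term("sofa")
term.add_synonym("couch", "common US variant seen in search logs", "editor_a")
term.add_synonym("settee", "appears in UK product feeds", "editor_b")

print(term.synonyms["couch"])   # the comment explains why 'couch' was introduced
print(len(term.history))        # 2 recorded changes
```

Even this toy version shows why the features matter: months later, anyone can retrieve why a synonym exists and who made which change.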

Along with many other aspects, being able to manage controlled vocabularies efficiently and effectively is one of the prerequisites for optimizing the search capabilities of an e-commerce site. Not only will it help drive online sales, because users will find the most relevant products and services, but it will also contribute to a positive shopping experience that brings new shoppers back.

Image|Flickr|isriya

An Overview of Semantic Technologies at Dow Jones

An overview of how the Dow Jones Enterprise Media Group uses semantic technologies and solutions for our own organization and for customers. This brief presentation was given at MIT to the Cambridge Semantic Web meetup on October 14, 2008.

Classifying Images Part 2: Basic Attributes

Last month I asked the question "What is the Hardest Content to Classify?" and promised additional posts on the subject based on my 13 years of experience developing taxonomy and indexing solutions for still image libraries, so I am continuing my thoughts in this post, focusing on the basic attributes of image classification.

In my opinion, images are the hardest content items to classify, but luckily, for sanity's sake, not all image classification is equally demanding.

The easiest elements of image classification relate to what I'm going to call image attributes metadata. This area, for me, covers all the metadata about the image files themselves, rather than information describing what is depicted in images and what images are about.

Metadata in this area covers many things, and there are also layers to consider:

1. The original object
-- This could be a statue, an oil painting, a glass plate negative, a digital original, or a photographic print

2. The second-generation images
-- The archive image taken of the original object, plus any further images: cut-down image files, screen sizes, thumbnails, and images in different formats (JPEG, TIFF, etc.)

The first thing to think about is the need to create a full and useful metadata scheme, capturing everything you need to know to support what you need to do. This may be archiving and/or search and retrieval.

Then look at what data you may already have or can obtain. Analyse the data for accuracy and completeness, and use whatever you can. Look to the new generation of digital cameras to obtain metadata from them. Ask image creators to record basic attribute data at the time of creation.

You'll be interested in the following metadata types:

- Scanner types
- Image processing activities
- Creator names
- Creator dates
- Last modified names
- Last modified dates
- Image sizes and formats
- Creator roles - photographers, artists, sculptors
- Locations of original objects
- Locations at which second generation images were created
- Unique image id numbers and batch numbers
- Secondary image codes that may come from various legacy systems
- Techniques used in the images - grain, blur etc
- Whether the images are part of a series and where they fit in that series
- The type of image - photographic print, glass plate negative, colour images, black and white images

This data really gives you a lot of background on the original object and on the various second-generation images created during production. Much of it can be obtained freely or cheaply, and lots of it will be quick and easy to capture and enter into your systems. It should also be objective and easy to check.
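
As a sketch of how such an attribute scheme might be modeled, here is one illustrative way to link an original object to its second-generation images, covering a handful of the metadata types listed above. All field names and values are assumptions made for the example, not a published standard.

```python
from dataclasses import dataclass

@dataclass
class OriginalObject:
    object_id: str
    object_type: str          # e.g. "oil painting", "glass plate negative"
    creator: str
    creator_role: str         # photographer, artist, sculptor
    location: str             # location of the original object

@dataclass
class DerivedImage:
    image_id: str
    original: OriginalObject  # link back to the source object
    file_format: str          # e.g. "TIFF", "JPEG"
    width_px: int
    height_px: int
    scanner_type: str = ""
    last_modified_by: str = ""

painting = OriginalObject("OBJ-001", "oil painting", "J. Vermeer",
                          "artist", "Rijksmuseum, Amsterdam")
archive = DerivedImage("IMG-001-A", painting, "TIFF", 6000, 4500,
                       scanner_type="flatbed")
thumb = DerivedImage("IMG-001-T", painting, "JPEG", 150, 113)

# Both derivatives resolve to the same original object:
print(archive.original.object_id == thumb.original.object_id)  # True
```

The two-layer split mirrors the point made earlier: one record describes the original object, and any number of second-generation image records hang off it.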

My next post will cover dealing with depicted content in images. Please feel free to leave comments or questions on the subject.

Image|Flickr|Daniel Y. Go

Synaptica Central : Dow Jones Video Library


Video might have killed the radio star, but in today's video-streaming world it is certainly helping distribute knowledge, and that is why we are publishing a video page to augment our blog postings.

Very often I talk to clients who need information to learn about key concepts, or who simply want to share a third-party view with their colleagues on specific topics around controlled vocabularies that I know someone on our team has presented or written about. It could be, for example, a white paper about Audience Centric Views, a video overview of taxonomy management tools and how to use them to collaborate on developing controlled vocabularies, or a real-life case study of an existing client using Synaptica. In the past, I kept these references in a .txt file on my desktop that I consulted when needed, but since this blog is being used as a resource both for us internally here at Dow Jones and for the community, I figured it would be a good time to start a Video Library of our Dow Jones public resources.

So without further ado: our Dow Jones Video Library has been published.

This is just the start of turning Synaptica Central into a go-to resource for our community, so please watch this space for additional resource pages, from recommended white papers and industry standards references to must-see videos, must-listen podcasts, and must-read books!

Have suggestions for things we should add to our resource pages? Please leave them in the comments or drop me a note at daniela.barbosa@dowjones.com.

Image|Flickr|traed mawr

In Developing a Custom Taxonomy Only Time Can Tell

OK, Quick Monday Quiz: How many minutes does it take to create a category (aka term, node, leaf, etc.)?

I suspect that anyone who has worked on developing a taxonomy has heard this question or a variation of it. It seems like we get it daily! Once a client decides they need or want a taxonomy, they need or want it immediately, so figuring out how long it will take becomes the next question.

After almost 30 years of involvement in the development of controlled vocabularies, thesauri, and taxonomies, I should be able to say it takes X minutes per term, but I'm still forced to tell clients that it depends on a number of things that are usually covered in the assessment phase of any engagement, such as:

• What is the topic of the taxonomy?
• What is its intended purpose?
• What systems will you use to develop and maintain it?

Once we’ve answered all these questions, the next one is frequently whether they could just use an already-developed taxonomy. No matter what approach is ultimately chosen, creating a taxonomy still takes time, and the ultimate answer is that it depends on what the client needs, how many terms there will be, how technical those terms are, and the taxonomy development tool being used.

Building a taxonomy for an area you are familiar with can be done fairly quickly, while building one in scientific, technical, or medical areas might be much slower. Adding to the issue of the topic is the issue of the tool in which the taxonomy is being built: the more efficient the tool, the faster the development once terms have been decided upon and the research for them completed.

Experience in developing taxonomies has given me some general metrics that can be used for pricing a taxonomy, but the reality is that the best answer is: it all depends on what is needed.

So, how long does it take? It takes as long as necessary!

Image|Flickr|h.koppdelaney

Taxonomies are a Commodity

For one reason or another (lots of travel, several hats at home and work), I've had trouble finalizing this post. Earlier today, though, I read Paul Miller's latest post on ZDNet. There seems to be some discussion about whether or not data is a commodity. I think there most definitely IS data that is a commodity.

Taxonomies are a valuable raw material in the management of information: a file that can be bought, sold, and used to improve services. They can be generated by humans, machines, or, even better, humans working with machines. Many taxonomies are a dime a dozen, with little to differentiate between versions of the same data. Some are like Kopi Luwak coffee, rare and extremely valuable. The word "taxonomy" is itself suffering from a kind of genericide. Classical definitions still apply, but taxonomies have become commoditized.

The complexity of the controlled vocabulary will determine its value to a degree. A simple pick list should be easy and cheap to acquire: a list of countries, for example, or colors, seasons, months; you get the idea. What is the value of a list of industries? Or companies? Maintenance is the primary cost factor, since frequent changes require frequent updates, but an authority file in and of itself is not that complex. A broad and deep poly-hierarchical taxonomy I would expect to have more value. A poly-hierarchical taxonomy is one in which a term can have more than one parent term, and managing these relationships takes more time. An ontology? Well, those aren't quite commodities yet, but they will get there. Why? Because they still require a great deal of thought and effort.
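
The poly-hierarchy point above can be sketched in a few lines: when a term may have more than one broader term, any roll-up must follow every parent, which is the extra relationship-management work the paragraph describes. The vocabulary below is invented purely for illustration.

```python
# term -> set of broader (parent) terms
PARENTS = {
    "Espresso": {"Coffee"},
    "Iced Coffee": {"Coffee", "Cold Drinks"},   # two parents: poly-hierarchy
    "Coffee": {"Beverages"},
    "Cold Drinks": {"Beverages"},
}

def ancestors(term):
    """Collect every broader term reachable from a term, via all parents."""
    found = set()
    for parent in PARENTS.get(term, set()):
        found.add(parent)
        found |= ancestors(parent)
    return found

# "Iced Coffee" rolls up through both of its parents:
print(sorted(ancestors("Iced Coffee")))  # ['Beverages', 'Coffee', 'Cold Drinks']
```

In a strict mono-hierarchy each term has one path to the top; here the same term legitimately lives in two places, and every relationship is one more thing to maintain.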

The source of the data will also help determine its value. Data from trusted sources - for whom integrity is paramount - should be valued higher. Is the data accurate? Is it maintained? Is it in a usable format? Does it have high availability? (Many quality vendors can be found at TaxonomyWarehouse.com.)

The uniqueness of the taxonomy will drive its value. Like our coffee example above, a taxonomy as ubiquitous as Starbucks will not be as valuable as say a pharmaceutical research vocabulary. Given the, uh, processes needed to produce Kopi Luwak, it is rare and therefore fetches a higher price, as would our R&D taxonomy.

Information security concerns also impact value. Our pharmaceutical company, or a financial services provider, is not about to release its vocabulary into the wild. It is a significant intellectual asset that merits a substantial IT effort to protect.

I actually like the fact that taxonomies have become commoditized. Why? Competition drives improvement: in quality, in focus, in security, and in usability. These are areas that the semantic web community needs to focus on; in my experience, security and usability need attention NOW. Good fences make good neighbors, and when we've got good fences, we can make more links and learn to trust. Icing on the cake!

Flickr image by INeedCoffee