Anonymous — October 26, 2008 - 2:42pm
According to a recent global survey conducted by The Nielsen Company about trends in online shopping, over 85 percent of the world’s online population has used the Internet to make a purchase.
Finding (or not finding) products and services on an e-commerce site is key to its success, regardless of the language an online shop operates in. The conversion rate of search, i.e. the rate at which products searched for are actually bought, is one of the central measures of how successful an e-commerce site is.
The end-user expects an interface that is intuitive and easy to use, with navigation and search that direct him or her to relevant products and services. How the user's search terms are actually associated with the "right" search results is of no interest to the online shopper, but it is a complex issue that all e-commerce sites and online shops have to deal with.
Having worked with many e-commerce customers in Europe, I have come across many of the complexities involved in optimizing a site's search capabilities, of which the end-user literally sees only the tip of the iceberg.
Among the content, controlled vocabulary, search metric and process questions that need to be addressed, having the right tools to optimize search is probably the simplest, but no less important.
Often, search engines focus on what they are made for: searching. Managing vocabularies for search improvement is usually not one of the areas that vendors specialize in or focus on. The most relevant features we encounter that search engines often do not cover are:
- Central management of vocabularies (products, services, colours, materials and other filters), ensuring there is one version in place from which extensions can be built if needed
- Different levels of access rights, allowing different users to contribute to a controlled vocabulary, for example by working directly with content editors to gather input
- The possibility to add comments to terms (why has x been introduced as a synonym of y?)
- Being able to monitor the progress and changes that have been made
- Being able to retrieve historical information
- Creating audience-centric views
- to name but a few!
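To make the first few features on that list concrete, here is a minimal sketch of how a vocabulary entry might be modelled, with synonyms, editorial comments and a change history. All names (`Term`, `add_synonym`, `vocab`) are hypothetical illustrations, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Term:
    """One entry in a controlled vocabulary (e.g. a product colour or material)."""
    label: str
    synonyms: set = field(default_factory=set)
    comments: list = field(default_factory=list)   # why a synonym was introduced
    history: list = field(default_factory=list)    # timestamped change log

    def add_synonym(self, synonym, comment, editor):
        self.synonyms.add(synonym)
        self.comments.append(comment)
        self.history.append((datetime.now().isoformat(), editor,
                             f"added synonym '{synonym}'"))

# One central vocabulary from which extensions can be built
vocab = {}
vocab["grey"] = Term("grey")
vocab["grey"].add_synonym("gray", "US spelling used by many shoppers", "editor_a")
```

Keeping the comment and history alongside the term is what makes later questions ("why is x a synonym of y, and who decided that?") answerable.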
Among many other aspects, being able to manage controlled vocabularies efficiently and effectively is one of the prerequisites for optimizing the search capabilities of an e-commerce site. Not only will it help drive online sales, because users will find the most relevant products and services, but it will also contribute to a positive shopping experience, so that new shoppers will return.
Anonymous — September 25, 2008 - 3:51pm
This morning I attended the Taxonomy Bootcamp and KMWorld joint keynote by Peter Morville on "Connecting Knowledge Management and Discovery: Search 3.0". Peter delivered an engaging overview of many aspects that are key to successful Knowledge Management and Discovery. Some of the points covered included:
- Good search and discovery being achieved through collaboration of people with different skills and an appreciation of Information Architecture focusing on business goals as well as user needs
- For website design, findability makes it critically important to provide multiple paths to information, such as alphabetical indexes, search engines, topical schemes and site maps, because users look for information for different reasons and take different approaches to finding it
- Information Architecture and website design are linked to a honeycomb of different qualities. A site needs to be useful, valuable, desirable, usable, findable, accessible and credible. These qualities are all interactive and interdependent
- The relationship between search and Knowledge Management is very important. Good quality content will be used and found, which encourages maintenance of the quality of this content
- When developing portals, Information Architects need to think about taxonomies and vocabularies. Content is more dynamic these days and we need to look at work done in both the collaboration and 2.0 space. A critical component of portals is Enterprise search. This needs federated search solutions that bridge the gaps between all repositories, including external websites and databases
- Any architect (physical or digital) needs to have one foot in the past and one in the future. We need to learn lessons from the past, but at the same time we need to understand that systems will be used for years to come and will become the legacy systems of the future.
- One interesting concept Peter talked about was that the disciplines of wayfinding (finding our way in the physical world) and information retrieval are converging. Examples of this are Google Earth and GPS devices, which converge mobile devices with location awareness. But just because we can do this, do we really want to?
- People, as well as things, are becoming findable objects. It will probably be about 30 years before the Internet of objects is fully realised via technologies such as RFID. This technology can help in many ways; the example given was Cisco enabling hospitals to tag and locate high-value objects such as wheelchairs left in rooms. These technologies will help with costs and customer service
A balance needs to be found with the web 2.0 movement, but we shouldn't throw away the ideas of Information Architecture and vocabulary development. In 10 years' time we are still going to be using a search box, which means we will still need taxonomies to provide options for browsing, navigation and filtering. Search and browsing will continue to work hand in hand.
The process of search is iterative and interactive, and over the course of a search a query can evolve. Search is also one of the most important ways in which we learn. We need to recognise that it is a complex adaptive system. It is not just about the interface or the user: we need to know how to get systems to work together, remove outdated content, and design interfaces that help users narrow down results and recover when they get stuck.
Three key questions when redeveloping a site are:
- Can users find our website?
- Can users find their way around our website?
- Can they find information and their way around the site DESPITE the website?
Design Patterns used in website creation:
- Best Bets – an opportunity for query disambiguation
- Federated Search – searching across multiple databases and locations, since users often don't know which database to search in
- Faceted Navigation – bringing search and browsing together and leveraging taxonomies and vocabularies. A decision needs to be taken on whether to push navigation to users or show it in a more subtle way
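Faceted navigation is, at its core, counting and filtering over controlled facet values. A minimal sketch of the mechanics, using a made-up three-item catalogue (all names here are hypothetical):

```python
from collections import Counter

# A hypothetical product catalogue; each product carries facet values.
products = [
    {"name": "Wool scarf",   "colour": "red",  "material": "wool"},
    {"name": "Silk scarf",   "colour": "red",  "material": "silk"},
    {"name": "Cotton shirt", "colour": "blue", "material": "cotton"},
]

def facet_counts(items, facet):
    """Count how many items fall under each value of a facet."""
    return Counter(item[facet] for item in items)

def filter_by(items, facet, value):
    """Narrow the result set by one facet selection."""
    return [item for item in items if item[facet] == value]

red_items = filter_by(products, "colour", "red")
# After selecting colour=red, the remaining material facet counts
# (wool: 1, silk: 1) tell the shopper how to narrow down further.
```

The recomputed counts after each selection are what let search and browsing work together: every click both filters and previews the next narrowing step.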
Ultimately we need to expand what we think of as search. Google Books, for example, dramatically expands what we think of as the searchable internet. Other examples are the searching of video and podcasts through sites such as Everyzing.
There are lots of possible futures for search. User experience design helps to identify future concepts. Search is a wicked problem. The only way to move forward is by sharing and working together.
Anonymous — September 24, 2008 - 9:34am
Sue Feldman has been a key analyst and researcher in the search space for a number of years. Her work at IDC as Vice President of Research, Search and Digital Marketplace is very highly regarded. (Sometimes I envy her job!) It's Wednesday morning in sunny San Jose, and Sue has just given the morning's keynote at the Enterprise Search Summit West.
Sue believes that we are seeing a convergence of tools in search, and thankfully the vendors are seeing a stronger market, which will motivate them to keep innovating. The future of search is not a platform based on transactions, as we have today. It will be a language-based foundation for a new platform - a knowledge platform that she predicts will gain equal place with transaction-based systems. The similarities with the evolution of the database platforms imply a parallel path.
We will continue to see development in categorization, text analytics and linguistic modules. This includes capabilities for identifying parts of speech; extracting entities, concepts, relationships, sentiment and geo-location; semantic understanding via dictionaries and taxonomies; and support for multiple languages. One of the biggest problems Sue thinks we need to solve in the search market is something taught to every library school student: selection. There is so much information, from so many sources - what can you trust? What are the valuable sources?
What are the market drivers? New business requirements. Updated business requirements. Connect the right information to the right people at the right time. We've heard that for years, and while it may be annoying, it's still valid! Determining the state of the business despite the data being in separate silos. Compliance - governments create and change those requirements regularly! Controlling costs - think information workers and call centers; the faster a service rep can find information and finish the call, the lower the per-call cost and the more calls they can take, translating to happier, more loyal customers and the next driver: increased revenue. In keeping with "Web 2.0", a better understanding of customers and improved communication with them are also key drivers.
eCommerce will require ever more sophisticated tools in the search and digital media space, as will publishers as they continue to migrate online. The digital marketplace and government (DoD, NIH) have been early investors in this space - for ad matching, interaction improvements, rich media search, fraud and terrorist detection, access to information and more. The market, according to Sue, is realizing that it has tons of information NOT in its ERP and CRM systems. Transaction-based computing is no longer enough. User-centered computing requires re-thinking and new human-computer interaction models.
Sue believes we need to automate knowledge work, as we are no longer limited to working 40 hours a week in our offices - we work in bits, here and there, 24/7, on multiple devices in many formats. We need personalized interaction models - and even more granular than just the individual level, at the level of the person's role: employee, volunteer, family, friend. The personalization needs to address the user, the device and the context. It needs to be flexible and adaptable, ad hoc in real time. It needs to be secure and contiguous across user environments.
The challenges for search are:
- How to unify access to all kinds of information from a single, contextual user interface
- Improving human-computer interaction models
- Identifying what is good in interaction design for information access
Sue believes that she will not have a market to forecast in 10 years. By then search will be embedded in the platform and in the applications to provide interaction. Applications will use this search platform to personalize, filter and visualize. We will see task-specific applications in our work environments. In fact, some of these applications are already on the market. Search will be at the center of interactive computing, as search is now language-based, just as humans are.
Anonymous — September 23, 2008 - 2:51pm
Like Daniela, I too am attending the 2008 KMWorld/Enterprise Search Summit West/Taxonomy Boot Camp meta-conference in San Jose. Shortly before lunch, we heard from Gary Szukalski, Vice President, Customer Relations, Autonomy. He spoke about Meaning-Based Computing. I've known Gary for a number of years and he did not disappoint - his message becomes more refined each time I see him speak.
Gary spoke of a "major paradigm shift" in the IT industry. For years, we (IT practitioners and vendors) have been forced, unnaturally, to aggregate, dumb down, and structure the mess of unstructured data that makes up approximately 80% of an organization's information assets. Why have we done this? Because that's how computers work - they need structure. We are moving into a world where we can stop forcing structure onto data, as computers will understand the semantics of what they are storing and indexing.
Now, he didn't say semantic web or semantic technologies. :) He talked about meaning - how do we teach our machines to disambiguate terms? He gave an Enron example: in the Enron corpus, "shred" means destroying paper documents, but also refers to slicing vegetables. It is a snowboarding reference as well. How does the machine know? This is where Autonomy is heading.
Why would we care? Gary spoke of the December 2007 amendments to the Federal Rules of Civil Procedure. In a nutshell, these amendments made all relevant electronic information admissible in a legal case. There are definite ROI measures to be had for using the right discovery tools to protect organizations from legal troubles. This brought to my mind the Sedona Principles as well - legal guidelines regarding the importance of metadata.
Pan-enterprise search is the new buzzword. Rather than aggregating - federating - sources together, a search tool should now be able to index ALL objects, regardless of file or storage type. Glad to hear a top ES vendor saying that finally!
Now, I was a big Verity customer/user at a prior employer. I gave them a great deal of feedback on their tools. One thing that always gnawed at me, born from my library roots, was that the definitions of the categories and topics that improved search relevance were locked in the tools. My organization defined them, but we couldn't share them easily - only the evidence of their existence, by means of better search results and faceted browsing. But the critical thing about "meaning" is that it be shared! In the "shred" example above, I fully understood its importance in the Enron context. But my first thought on hearing the word was cooking, while the woman next to me thought of snowboarding. How does an organization use the power of the tool to educate the users of the tool? Who is working on the UI part of this paradigm shift? And who is thinking about the UI in the context of information security? Secure search should provide access at the role, group, organization or public level. Is Autonomy using open standards to minimize the effort of integrating metadata pan-enterprise? For me, pan-enterprise is not just behind the firewall; it extends onto the web in the form of corporate messaging and consumer feedback. Are any of the enterprise search vendors using open methods to allow this kind of integration? I'm interested in hearing, as I left the search world behind a couple of years ago and have drifted towards the outer edges of the space.
This was one of the better presentations this morning, and I hope they post the slides somewhere soon!
Anonymous — September 23, 2008 - 10:00am
I am at the opening day keynote for Enterprise Search Summit West in San Jose today, having rushed down from Pacifica on this beautiful morning, driving (ok, speeding) down 280 to make this early morning session. Obviously, if you have been following me for a while over on my blog, you know I have a 'thing' for social tagging and recently published an eBook on hybrid approaches to folksonomies and taxonomies in the enterprise, so I did not want to miss it.
The keynote is titled 'Tag, You're It: Social Tagging Strategies for the Enterprise' and is led by Gene Smith, Principal, nForm Experience. Gene is the author of the book 'Tagging: People-Powered Metadata for the Social Web'.
Why We're Here? (at the conference)
To figure out how to find *the good* stuff
The 19th century saw an explosion in paper records and a flourishing of patent filings for ways to store records and information. The approach that emerged as the winner was vertical filing, in which folders and tabs were a key piece. Tabs from vertical filing are still seen in today's web user interfaces.
Folders have been the dominant organizing principle - then links came onto the scene.
Instead of an information explosion, think of it as a stream - immersion in the flow.
The challenge is keeping track of things and finding what we need later on - tags are one way to do that.
A tag (word) can mean a lot of different things.
Looking at different tools and why they are interesting:
Zigtag - semantic social bookmarking
When you are about to tag something, you type and pick from a list that includes a definition of each concept.
They have millions of concepts - they mine public data sources for user-generated content and built an inference engine to provide the concepts.
Any person can make any two tags equivalent - but they can also remove the equivalence. "humor" and "humour" are the same word but have different meanings in different cultures (America vs. UK), and the authors tagged with each are different.
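That remove-as-well-as-add behaviour can be sketched very simply: user-asserted equivalences are just pairs that can be asserted or retracted, rather than a permanent merge. This is an illustrative sketch, not Zigtag's actual implementation; all names are made up.

```python
class TagEquivalences:
    """Track user-asserted equivalences between tags (e.g. 'humor' ~ 'humour')."""

    def __init__(self):
        self.pairs = set()

    def merge(self, a, b):
        """Assert that two tags are equivalent."""
        self.pairs.add(frozenset((a, b)))

    def unmerge(self, a, b):
        """Retract the equivalence - e.g. when the cultures really do differ."""
        self.pairs.discard(frozenset((a, b)))

    def equivalents(self, tag):
        """All tags a search for `tag` should also match."""
        result = {tag}
        for pair in self.pairs:
            if tag in pair:
                result |= pair
        return result

eq = TagEquivalences()
eq.merge("humor", "humour")
eq.equivalents("humor")          # now matches both spellings
eq.unmerge("humor", "humour")    # and the link can be undone just as easily
```

Keeping each equivalence as an independent, retractable pair (instead of collapsing tags into one canonical form) is what makes the "but they can also remove it" part cheap.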
Value chain of the LibraryThing features:
combine tags → tag mash search → tagsonomies (mapped to existing categories)
The big problem is getting people to use the tools you provide for them!
- Creating incentives - reward a person by identifying that they were the first to tag something, or create social proof ('feature linker' - who doesn't like to see their name in lights?)
- Try to pre-populate the tag box with tags other people have used
Some other examples:
Wesabe - sticky tags are always applied to an item, but 'not sticky' or one-time tags are also allowed. Wesabe shows you your spending habits by clustering your tags, giving users a direct benefit from the tags they used.
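The "benefit from your own tags" idea boils down to aggregating spending by tag. A minimal sketch with invented transactions (this is an illustration of the clustering idea, not Wesabe's actual data model):

```python
from collections import defaultdict

# Hypothetical transactions: (merchant, amount, tags). A "sticky" tag like
# 'coffee' persists across visits to the same merchant; a one-time tag
# like 'birthday' applies to a single transaction only.
transactions = [
    ("Corner Cafe",  4.50, {"coffee", "food"}),
    ("Corner Cafe",  6.00, {"coffee", "food", "birthday"}),
    ("Rail Co",     52.00, {"commute"}),
]

def spending_by_tag(txns):
    """Cluster spending by tag to reveal habits from the user's own labels."""
    totals = defaultdict(float)
    for _merchant, amount, tags in txns:
        for tag in tags:
            totals[tag] += amount
    return dict(totals)
```

Because sticky tags re-apply automatically at the same merchant, the user tags once and the habit report keeps improving with every transaction.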
Dogear - built internally at IBM and architected so that it produces an RSS feed for every tag. As people started using it, groups found interesting things to do with their RSS feeds, such as displaying the content in other environments and creating mashups - allowing innovation on top of the tags, so that value is created according to the users' needs.
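A feed-per-tag is a simple contract: filter the bookmarks by tag, emit RSS, and anyone can remix the result. A minimal sketch of that idea (the bookmark data and function names are hypothetical, and this is a bare-bones RSS 2.0 document, not Dogear's actual output):

```python
from xml.sax.saxutils import escape

# Hypothetical shared bookmarks, each carrying a set of tags.
bookmarks = [
    {"title": "Faceted search primer", "url": "http://example.com/facets",
     "tags": {"search"}},
    {"title": "Tagging at work", "url": "http://example.com/tags",
     "tags": {"search", "tagging"}},
]

def rss_for_tag(tag, items):
    """Emit a minimal RSS 2.0 feed of all bookmarks carrying a given tag."""
    entries = "".join(
        f"<item><title>{escape(b['title'])}</title>"
        f"<link>{escape(b['url'])}</link></item>"
        for b in items if tag in b["tags"]
    )
    return ('<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>Bookmarks tagged '{escape(tag)}'</title>"
            f"{entries}</channel></rss>")
```

Once every tag is addressable as a feed, the mashups the post describes (embedding tagged content in other environments) come for free from standard RSS tooling.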