An overview of the NFAIS 2015 Annual Conference: Anticipating Demand: The User Experience as Driver
This paper provides an overview of the highlights of the 2015 NFAIS Annual Conference, Anticipating Demand: The User Experience as Driver, held in Crystal City, VA, from February 22–24, 2015. The phrase “Content is King” can arouse hot debates over its relevance in a technology-dominated world. But the reality is that searchers are looking for answers to their questions. Information remains the Holy Grail. But the convergence of content, robust computing technologies and innovative devices has changed the search experience and elevated user expectations. The goal of the NFAIS conference was to look at how content providers and librarians are building products and services that will not only provide the ultimate in today’s information experience, but will also raise that experience to a new level of satisfaction. Conference attendees used their smartphones throughout the meeting to give their opinions on key issues that were raised, and the questions and responses are included at the end of each session summary.
1. Introduction
The term “User Experience” is now part of information industry jargon. It even has its own abbreviation – UX. But what is it? According to Wikipedia, the term began to be visibly used in the mid-1990s [8] and it is defined as involving a person’s behaviors, attitudes and emotions about using a particular product, system or service. There is even an ISO standard (ISO 9241-210) which defines the term as “a person’s perceptions and responses that result from the use or anticipated use of a product, system or service”. It includes all of the user’s emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use. You can find a list of other apt definitions at http://www.allaboutux.org/ux-definitions – all pretty much in agreement.
Why is it important? Because whoever provides the best User Experience ultimately captures the user’s loyalty. Once captured, it is very difficult for anyone to lure them away – who wants to learn another system? That lesson was learned during the early online days of Dialog, BRS and SDC. Time constraints alone make it unlikely that students, faculty and researchers will shop around once they find a system that meets their needs.
Who owns the user today? No doubt in my mind that it is Google. When attendees of this conference were asked what has the greatest impact on user expectations of research information services, sixty-eight percent said the products and services released by Google, Apple, Amazon or Microsoft. In fact, several speakers noted Google’s dominance during their presentations. Kalev Leetaru, Founder of GDELT and Adjunct Faculty at Georgetown University, noted in his opening keynote that Google has become part of the fabric of our lives. It has become the go-to service because it has created a “frictionless” interface for searching the world. Kate Lawrence of EBSCO encouraged publishers and librarians to accept the influence of Google and Wikipedia and design for innovation that is rooted in familiarity. Mark Jacobson of Delta Think said that Google and Amazon have set the bar for search. And Alex Wade of Microsoft referred to a study by the Queensland University of Technology on the information-seeking behavior of graduate students which showed that sixty-four percent of the students used Google as their starting point [4]. While Google was not the theme of the conference, it became apparent that it provides the User Experience that is the Gold Standard against which all others are compared. But read on – there is a lot more to learn.
2. Setting the stage
The opening keynote was given by Kalev Leetaru, Founder of the GDELT Project and Past Fellow and Adjunct Faculty at Georgetown University. The title of his presentation was “The user of the future: Reimagining how we think about information”. He noted that from the very beginning of society mankind has attempted to document information – from cave paintings, hieroglyphics and hand-scribed manuscripts, to books, journals and now the Internet. The amount of information being created in today’s digital world is astounding and he provided a number of interesting statistics: one third of the world’s population is online; there are as many cell phones as there are people on earth; and Facebook (as of late last year) had two hundred and forty billion photographs, representing thirty-five percent of all online photos, as well as one billion members with one trillion connections! Every year there are 6.1 trillion text messages and one hundred and seven trillion e-mails, and 1.6 million days’ worth of video is uploaded to YouTube. Every day, on Facebook alone, five billion new items are added, three hundred million photos are posted, and five hundred terabytes of new data about society’s innermost thoughts are generated. If that isn’t enough to boggle the mind, he commented that there are as many words posted to Twitter every day as the entire New York Times has printed in the last half century! Up until 1994 we pretty much depended on the printed newspaper or radio and TV broadcasts (at specific times per day) to get news. By 2010, forty-five percent of the world’s news was being transmitted in real-time via the Internet. He then raised the question that he deals with every day: What do we do with all of this data and how can it be used to uncover more information about the world?
He first made an observation about the search process and how little has changed from the user’s perspective since online systems such as Dialog were launched almost half a century ago. The information-seeker will sit down, usually in front of a PC, figure out what keywords to use, and enter them into a search box – the same box that was used fifty years ago! What has changed is that searching, especially via Google, has become far more powerful and has been integrated into the fabric of our lives. Google is used daily for e-mail, calendars, maps, document sharing and storage, etc. It knows our preferences, it knows where we are when we perform our searches, and it has enormous power behind the scenes. Google is far more powerful than when it first came on the scene, but the search process remains simple to the user – all of the bells and whistles are hidden away under the hood rather than incorporated into the search interface, where they would confound and confuse the user. Kalev described it as a “frictionless” search process. The experience is pleasurable and seamless.
His second observation was the need for content providers to understand their users, and he gave the example of Facebook and LinkedIn. Both are highly successful social media platforms, but they focus on different markets and have a deep understanding of the needs and behaviors of those markets. They build services for their target audiences and really do not compete.
His third observation may be the most important – the need for content providers to allow users to take their content and build on it. He encouraged them to look at their digital content as a valuable archive, not a library. With libraries the material is checked out and read at home – the content travels to the analysis. With archives the user physically travels to the reading room, analyzes the material there, and takes only their notes when they leave – the analysis travels to the content. Kalev used this latter model when working on the Internet Archive Reading Room project and he suggested it as a method for offering “legal” data mining as a new revenue stream [6].
Kalev then briefly described the GDELT Project, which he founded. It is the largest and most comprehensive open database of human society ever created. It is a real-time monitor of global news media (print, broadcast and web formats) in more than one hundred languages that stretches from January 1, 1979 through the present day and is updated daily. His vision is to “leverage this data to construct a catalog of human societal-scale behavior and beliefs across all countries of the world, connecting every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what’s happening around the world, what its context is and who’s involved, and how the world is feeling about it, every single day” (see http://gdeltproject.org/about.html#creation). The content can be freely used and distributed as long as attribution is given.
In closing, Kalev encouraged content providers to jettison the past and re-think their content. How are people using it now and how will they want to use it in the future? He urged them to give serious consideration to enabling researchers to build on that content in order to unleash its secrets and its relationships with other data and to look at new business models that will support the user of the future. For more details read two of Kalev’s articles that are published elsewhere in this issue. One is a more detailed account of his keynote address and the second is a reflection (with many concrete examples) on the growth in data mining initiatives over the last two decades and how these projects are impacting libraries and scholarship.
The audience question and responses for this session are as follows: Of these disruptive influences, which do you believe will have the greatest impact on the researcher’s workflow?
Volume and availability of data for mining and analysis – 52.3%;
Cognitive computing/smart systems – 21.5%;
Innovative means of interacting with devices – 15.4%;
New content sources and formats – 10.8%.
3. Librarians and publishers: Caught in the middle
Kalev was followed by a session entitled “The user experience: Increased demands that we must satisfy”. The first speaker was David Shumaker, Clinical Associate Professor in the Department of Library and Information Science at the Catholic University of America, who spoke about publishers and librarians now being caught in the middle of two very different, but parallel, revolutions. The first revolution is between the conflicting ideas that information wants to be free and information wants to be expensive. This revolution was first identified by Stewart Brand and has been raging for decades [1]. The second revolution is related to the current diverse methods by which we obtain and share information – print, web, video, audio, personal, social media, etc. – and he characterized it as a revolution between information wanting to be for machines and information wanting to be for humans. He then took it a step further and suggested that we not view the world as though we are all caught in chaotic debates outside our control, but rather step back and look at yet another way we are caught in the middle – each of us is caught between the information that we gather for personal use and the system that we use to manage that information. We need to understand how we function now and how we want to function in the future.
Shumaker talked about all of his information sources – webinars, e-mails, books, blogs, databases, conferences, etc. – and how difficult it is for him to adequately manage that information. First, information access is not always easy, as he is often not allowed to retrieve information unless he is working on his campus network. Once he has access to the information he is faced with the challenge of tracking it. He uses a commercial personal bibliographic management product (name undisclosed), but entries always need to be cleaned up and the import process is far from seamless. Also, he retains his documents in the Cloud and not in the bibliographic system, and keeping the two in sync is hit or miss. He noted that he views information as a means to an end. Once the problem is solved he moves on, and the information that has been used is not always stored in an easily-retrieved manner.
He said that he knows that publishers create really good tools that could eliminate his problem, but he doesn’t have the time to look for them and perhaps even lacks the funds to purchase one. He said that a potential solution to his problem is the use of librarians as change agents – especially “embedded” librarians who work closely with faculty and researchers, know their search behaviors, and know what they need. Publishers who have the tools that users want need to be able to get to the user and vice versa. Publishers know the librarians; the librarians know the users and can serve as an effective conduit for both.
In closing, Shumaker said that while we are caught in the middle of multiple opposing forces that are related and feed upon one another, collaboration between publishers and librarians can lessen the impact of those opposing forces on the average academic information seeker. He did not mean this to sound like a trite, simplistic, easy solution. He realizes that it is hard; otherwise the problem would already have been solved. But he has seen change already and is optimistic that there is more to come. See more from David in his article published elsewhere in this issue.
4. The design of information platforms
The theme of the session was continued by the second speaker, Peter Fox, Professor and Tetherless World Research Constellation Chair at Rensselaer Polytechnic Institute, who spoke on his experience with ensuring that users are a fundamental part of the process of platform development. He said that it is critical to start with a vision built around a specific goal or objective and to keep the process collaborative and iterative, with a heavy dependence upon use cases. In his projects the participants all shared a common research problem, yet they were rarely working from the same physical location or even for the same organization. They used information technology to work and to communicate, and they assumed well-defined roles and status relationships within the context of the virtual group that were often totally independent of their role and status in the organization that employed them.
He shared a slide borrowed from Bill Rouse at Georgia Tech that compared a traditional organization with a global, collaborative group of researchers. For example, in a traditional system a role may be management, the methods used in that role are command and control, the measurement is activities with a focus on efficiencies, and the relationships are contractual. In the virtual network the role is leadership, the methods are incentives and inhibitors, the measurement is outcomes with a focus on agility, and the relationships are based upon personal commitments. The structure of a traditional system is usually hierarchical; in a virtual, complex network the structure is a heterarchy – a web of interconnections that requires a clear understanding of roles and relationships. A key feature of virtual organizations is a high degree of informal communication. Because of the lack of formal rules and clear reporting relationships, more extensive informal communication is required, and documentation takes on a major role. Fox also said that it is essential that a project have a value philosophy that focuses on outputs. The value must be derived from the benefits of the outcomes (not the outcomes themselves), the benefits should be relevant, usable and useful, and it is essential that the users or beneficiaries fully understand and appreciate the benefits.
He noted that they always use small development teams (seven to eight people) representing a mix of skills: a facilitator who knows the development method being deployed and who has the key skills that are required; two to three domain experts who know the resources, data, applications, tools, etc.; one or two modelers who extract the concepts; one or two software engineers who understand platform architecture and technology; and a scribe to write everything down. Fox noted that the social aspect is key because it is very much a team effort.
A diagram that shows the iterative steps of the development process can be seen at http://tw.rpi.edu/web/doc/TWC_SemanticWebMethodology. He noted that the technology piece comes at the very end because it is more important to understand the user needs before zeroing-in on a specific technology implementation. Each step in the process is taught in a week-long course at Rensselaer (see http://tw.rpi.edu/web/courses/SemanticeScience), and a template for building use cases can be accessed at: http://tw.rpi.edu/media/latest/UseCase-Template_SeS_2012.
The audience question and responses for this session are as follows: Which of the following do you believe might be of the greatest concern to researchers in the current information environment?
Appropriate assessment of the impact/value of their research by funders and peers – 62.9%;
Discovery of relevant materials once published – 24.3%;
Effective presentation of research in forms that promote subsequent re-use – 12.9%.
5. Emerging workflow tools
The final session of the opening day highlighted new startup companies that are delivering value to users by providing solutions to current problems in information access and discovery – problems that traditional content providers are not adequately addressing. The most notable are described in the following paragraphs, and their web sites are worth a look.
The first was Kudos (https://growkudos.com), a web-based service that helps research articles get found, read and cited. Melinda Kenneway, Executive Director and Co-Founder, commented that the old adage of “Publish or Perish” has evolved in the digital world to “Be Discovered or Die”. She said that there is a long tail of under-cited, under-utilized articles and that this lack of discoverability has a negative effect on a researcher’s career and reputation; on a publisher’s revenue and impact factor; on a university’s ranking and funding; and on the overall ROI and effectiveness of research funders. When forty thousand researchers were surveyed, eighty-four percent of the four thousand respondents said that they believed that more could be done to increase the impact of their work, and seventy-five percent said that they would use the Kudos tools that were described.
Basically, a researcher registers on the Kudos site and lists their published articles (each article must have a DOI). The related metadata is retrieved from CrossRef. The author can add additional information for each article, e.g. explain the article in plain language and why the research is important. He/she can also enhance the article by providing links to related ‘resources’ (videos, slides, blogs, media coverage, data, code, documents, etc.) that help to set the publication in context. Using the Kudos toolkit, the author can share links via social media, e-mail, etc., track the effectiveness of such sharing, and map it against the actions that they have taken. Each author has a dashboard showing the activities that they have undertaken to explain and share their work. Tracked links then show the effect of these activities on key metrics, including full-text downloads and ‘altmetrics’. A fourteen-week pilot was undertaken in 2013. The results showed a nineteen percent higher article usage per day for articles shared using the Kudos tools compared to a control group.
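Kudos’ own integration was not described in the talk, but the DOI-based metadata lookup that this kind of workflow depends on can be sketched against CrossRef’s public REST API (api.crossref.org). In the Python sketch below the endpoint is CrossRef’s documented one, while the DOI and the choice of fields to extract are illustrative assumptions only.

```python
import requests

def fetch_crossref_metadata(doi: str) -> dict:
    """Fetch basic article metadata for a DOI from CrossRef's public REST API."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    record = resp.json()["message"]  # CrossRef wraps the work record in "message"
    return {
        "title": (record.get("title") or [""])[0],
        "container": (record.get("container-title") or [""])[0],
        "authors": [f"{a.get('given', '')} {a.get('family', '')}".strip()
                    for a in record.get("author", [])],
        "published": record.get("issued", {}).get("date-parts", [[None]])[0],
    }

if __name__ == "__main__":
    # Hypothetical DOI, used purely for illustration.
    print(fetch_crossref_metadata("10.1000/example.doi"))
```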
The basic service is free for researchers who register. Funding agencies and publishers pay a fee for access to support tools, information on publication performance and author-sharing effectiveness within Kudos, and also to supplement the data set available to help authors evaluate the impact of their use of the Kudos tools.
Kenneway said that since May 2014, a quarter of a million researchers have been invited to register and Kudos now has 30,000+ registered users (12%), with 345,000+ publications claimed, 4,500+ publications explained, 1,500+ publications enhanced and 1,500+ publications shared. Two hundred and twenty-seven countries are accessing the content with the highest usage from USA (18%), UK (12%), India (7%) and China/Italy/Australia (0.5%). There are 10,000+ participating institutions, with the highest usage from Oxford, Cambridge, Imperial College, Edinburgh, UCLA and Harvard. And there have been 592,000+ views of Kudos publication pages and 13,500+ click-throughs to publisher sites.
Users to date can be categorized as follows: Professors: 32%, Faculty members: 18%, Lecturers: 8%, Post-docs: 7%, Research fellows: 6%, Research associates: 5%; and Graduate students: 5%. Subject disciplines represented are: Medicine and Medical Sciences: 17%; Chemistry: 11%; Life Sciences: 10%; Health Sciences: 10%; Materials Science: 5%; Business and Management: 5%; Engineering and Technology: 4%; Physics: 3%; Environmental Sciences: 3%; and Other: 4%.
The second impressive start-up was ZappyLab. Lenny Teytelman, Co-Founder, talked about the fact that more than one million research papers are published every year and it is very hard to keep up with science. Also, there are often errors in those papers that inhibit reproducibility. He gave as an example the fact that he spent a year-and-a-half working on a project only to find that a basic biomedical protocol (equivalent to a recipe) was incorrect. Instead of one microliter he needed five, and it needed to be “cooked” for one hour, not fifteen minutes. He corrected it, told all of his peers who were working on the project, and called the researcher who published the original article. He could not write a paper on what he found as it was not original research, so other than the people he spoke with, everyone going to the original article continues to use incorrect information. He said that this type of error is not uncommon and that there is no infrastructure in place that allows researchers to share the errors they have found and corrected. So about three years ago he and a friend established protocols.io (http://www.protocols.io), a wiki-like platform that allows researchers to post and get credit for their discoveries and fixes. It also allows other researchers to discover corrections and optimizations of the methods that they use and to get answers for trouble-shooting a technique. Protocols.io is free, it is mobile and cloud-focused, and it allows researchers to store details on their experiments for future reproduction. Lenny has provided details on this innovative new service in an article published elsewhere in this issue – definitely worth a read!
Another impressive start-up is Sciencescape (https://sciencescape.org), a Canada-based organization that attempts to provide researchers with the most current information on topics of specific interest to them. Sam Molyneux, CEO and Co-Founder, noted that it is impossible to keep up with the amount of scientific literature being published. He said that in the biomedical field about 2,674 papers are added each day. Sciencescape is an attempt to solve that problem. It has complete coverage of the biomedical sciences, updated in real-time from PubMed. With over twenty-two million papers that date back to the 1800s, Sciencescape takes extensive measures to ensure that not a single article is missed among the thousands of new papers published each day. A researcher signs up (the service is free) for the subjects in which he/she is interested and is notified when a new, relevant paper has been published. When researchers do a search, the most relevant papers (based upon the Eigenfactor metric) are highlighted. The user can track new publications by author, institution, etc. Sciencescape has a partnership with Elsevier and in early 2014 closed a $2.5 million round of funding that was oversubscribed. I am sure that we will be hearing more about them in the future.
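The Eigenfactor-style ranking Sciencescape relies on is, at its heart, an eigenvector-centrality computation over a citation graph: a paper matters more when it is cited by papers that themselves matter. The Python sketch below is not Sciencescape’s code and omits the refinements of the real Eigenfactor method (journal-level aggregation, weighted teleportation, a five-year citation window); it only shows the underlying power-iteration idea on a toy graph.

```python
def citation_rank(citations: dict[str, list[str]],
                  damping: float = 0.85,
                  iterations: int = 50) -> dict[str, float]:
    """Power iteration over a citation graph (paper -> list of papers it cites).
    Score flows from citing papers to the papers they cite, PageRank-style."""
    papers = set(citations) | {p for cited in citations.values() for p in cited}
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(papers) for p in papers}
        for citing in papers:
            cited_list = citations.get(citing, [])
            if cited_list:  # spread the citing paper's score over its references
                share = damping * rank[citing] / len(cited_list)
                for cited in cited_list:
                    new_rank[cited] += share
            else:           # dangling paper: spread its score evenly
                for p in papers:
                    new_rank[p] += damping * rank[citing] / len(papers)
        rank = new_rank
    return rank

# Toy example: C is cited by both A and B, so it ends up ranked highest.
print(citation_rank({"A": ["C"], "B": ["C"], "C": []}))
```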
Another interesting start-up is Hypothes.is (https://hypothes.is), a non-profit organization that is building an open platform for discussion on the web. Dan Whaley, Founder, noted that the web’s existing commenting systems are few in number and all of them are problematic. This service will allow for “addressable” web-based comments that can be either public or private for working groups. It leverages annotation to enable sentence-level critique or note-taking on top of news, blogs, scientific articles, books, terms of service, ballot initiatives, legislation, etc. The organization is funded by the Knight, Mellon, Shuttleworth, Sloan and Helmsley Foundations, and through the support of individual donations from those who believe in the concept. He said that the project is based on the Annotator project (http://annotatorjs.com), to which they contribute, and on the annotation standards for digital documents being developed by the W3C Web Annotation Working Group (http://www.w3.org/annotation/). The long-term goal is to allow for annotations throughout a document’s life cycle – from manuscript to published article through its post-publication reviews – and to allow links to other documents.
The audience question and the responses for this session are as follows: Which of the following enhancements might be of greatest interest to the community of researchers served by your particular information product or service?
Anticipatory discovery – 50.8%;
Metrics of usage, impact, effectiveness – 25.4%;
Ability to annotate content – 12.7%;
Disambiguation of identity – 9.5%;
Sharing of relevant content – 1.6%.
6. Information: Who pays for it?
The second day of the conference opened with a plenary session, entitled “Information wants someone else to pay for it: As science and scholarship evolve, who consumes and who pays?” The speaker, Dr. Micah Altman, Director of Research and Head Scientist, Program on Information Science for the MIT Libraries, Massachusetts Institute of Technology, reflected on trends in the generation and use of durable information assets in scholarship and science, and on the changing relationship between consumers, purchasers and funders.
The first trend he identified was in the authorship of content. From the 1930s through the early 1960s the average number of authors per paper was two. This grew to 4.5 in the 1980s and to 6.9 by the year 2000. Now, with crowd-sourcing projects such as Galaxy Zoo (see http://www.galaxyzoo.org), the number of contributors is in the hundreds of thousands.
The second trend has already been noted by other speakers, and that is the exponential growth in information with content filtering being more of an afterthought. And related to this trend is the fact that no institution can preserve all of the information upon which it relies.
The third trend is the diversity of information sources that have emerged as a result of information now being primarily in digital form: legal briefs, grants, software code, data sets, audio, video, etc. This diversity impacts information flow, curation, analytics, etc. Even education is changing with major institutions of learning offering “free” online courses to education seekers around the world.
Altman said that one economic principle is that everything gets to equilibrium in the long run. But the fallacy here is that the “long run” can take a significant amount of time and that policies need to be made in the interim. He also noted that many things in the real world violate market assumptions. He offered no solution to the problem, citing the often-quoted line that “It’s tough to make predictions, especially about the future”. But he did note that technology is not the answer – it is neither good nor bad, nor is it neutral [5]. Dr. Altman has written a very thoughtful article on the economics of the publishing market and why, despite the fact that the volume of content is increasing, the amount of competition has declined. It appears elsewhere in this issue and definitely is worth a read.
The audience question and responses for this session are as follows: Which of the following should rank highest as a priority for content and platform providers over the next three years?
Fostering new content formats in support of more effective presentation and communication of research findings – 43.9%;
Enhancing value via new platform functionalities and interfaces – 31.6%;
Developing new or adapting their existing business models – 24.6%.
7. Building platforms for sustainable use
The second session of the morning continued the prior day’s discussion on building information platforms that engage and excite users. The first speaker was Mark Jacobson, Senior Consultant, Delta Think, Inc. He noted that user expectations are increasingly influenced by their experience with tools and services provided by the large tech giants. Amazon and Google set the bar for search, while Amazon and Apple set the bar for e-commerce. All three know their users’ history and preferences. He noted that about ten years ago, while implementing a Content Management System for a large publisher, he and the client got to discussing search requirements. The users made it quite clear that they wanted Google-like search capabilities. In the following years some other interesting search and browse requirements began to surface with increasing frequency on various projects, particularly requests for Amazon-like search facets. Recently there has been a lot of activity around the following: streamlining e-commerce; a 360° view of the customer; and personalization, all of which are strongly influenced by user experiences with the Apple Store and Amazon.
Looking forward, Jacobson said that we can expect to see an even greater focus on the e-commerce user experience, with ease-of-purchase and quick fulfillment/delivery, and a continued focus on the 360° view of the customer with the implementation of single sign-on and social sign-on. He noted that users will continue to expect that the supplier will know about all of their interactions with them – both online and offline – and will know their other online behaviors and interests. There will also be a continued focus on personalization – users like the supplier to recommend highly relevant products and services based on that knowledge.
Mobile apps will also influence user expectations. The user interface is disappearing. Users string apps together to streamline their efforts and create custom workflows. Jacobson referred to this as the IFTTT pattern (if this, then that; see http://en.wikipedia.org/wiki/IFTTT), illustrated in the sketch below. He also said that updates need to be frequent – at least monthly, and preferably more often – and that suppliers need to integrate their platforms with the users’ preferred tools (e-mail, calendars, social media, etc.).
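Jacobson presented no code, but the IFTTT pattern is easy to picture: a user-defined rule binds a trigger observed in one service to an action in another. The Python sketch below is purely illustrative – the event fields, trigger and action are hypothetical stand-ins, not real service APIs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """An 'if this, then that' rule: when the trigger matches an event, run the action."""
    name: str
    trigger: Callable[[dict], bool]   # inspects an incoming event
    action: Callable[[dict], None]    # does something with it

def dispatch(event: dict, rules: list[Rule]) -> None:
    for rule in rules:
        if rule.trigger(event):
            rule.action(event)

# Hypothetical rule: when a new article in the user's saved topic appears,
# push it to their reading list (both the event source and the action are stand-ins).
rules = [
    Rule(
        name="new-article-to-reading-list",
        trigger=lambda e: e.get("type") == "new_article" and "UX" in e.get("topics", []),
        action=lambda e: print(f"Added to reading list: {e['title']}"),
    )
]

dispatch({"type": "new_article", "topics": ["UX"], "title": "Designing for discovery"}, rules)
```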
He said that it is essential that content suppliers understand their customers’ other technology experiences and expectations. They need to “refresh” their technology about every three years so that they remain current, and they need to be aware of how their internal systems interface with external systems and the Internet at-large. Some key technologies that they should be using are as follows: user identity management (e.g. SSO and CRM); marketing automation tools in order to identify traits across anonymous users; semantic technology to enrich content and user profiles; and recommendation engines. He closed with two principles: (1) become really agile, and if you adopt a hybrid approach be sure you understand the impact of any compromises that you make; and (2) if your development problem is execution, changing methodologies and technologies won’t fix it.
The second speaker in this session on platform development was Alex Humphreys, Associate Vice President, JSTOR Labs and New Business Development, Ithaka. He said that his group is relatively new and that their mission is to seek out new concepts and opportunities and to refine and validate them through research and experimentation. He commented that to build a platform for sustainable use, developers need to know what will thrill users. Finding the right balance of technology, functionality, and design to both delight and energize users takes a thousand decisions, pivots and changes. Humphreys said that the JSTOR Labs team has been using Flash Builds – high-intensity, short-burst, user-driven development efforts – in order to prototype new ideas and get users to say “Wow” in as little as a week. He learned about this method from Marty Cagan [2]. In his highly-entertaining presentation, Humphreys described how they did just that during a whirlwind, fact-finding week in Ann Arbor, MI, highlighting the partnerships, skills, tools and content that facilitate innovation. Humphreys said that the major ingredients for these efforts are a small, diversely-skilled team, a safe working environment in which failure is not penalized, the ability to get the entire team in front of users, a partner that can provide another pair of eyes/ears, and a pencil for taking notes, drawing prototypes, etc. The audience was impressed and you will be, too, if you take a look at Humphreys’ article that appears elsewhere in this issue.
The final speaker in this session was Dr. Tim Clark, a member of the Scientific Advisory Board for Open PHACTS, a European Union Innovative Medicines Initiative project driven by the needs of both academic and industrial partners involved in pharmaceutical research and development. Its core approach involves using semantic web technology to integrate information from multiple sources in order to solve real-world drug discovery problems. The project is supported by both public and private funding and is basically an ecosystem of partners trying to solve a common problem. Clark noted that the number of new drugs being created per dollar spent on research is declining and that on average it takes about $1.8 billion to create one drug. The core of this project is for pharmaceutical companies to share activities that are non-competitive; one of these activities is the compilation and linking of the scientific literature with patent information, chemical structures, related data, etc., and building this database behind a firewall – secure and safe. Clark called this “pre-competitive informatics”, and sharing the cost of this work reduces the overall cost to each partner while increasing R&D productivity. Their mission is to integrate multiple biomedical research data resources into a single open and free access point, with the goal of delivering services to support ongoing drug discovery programs in the pharma and public domains. Participants are leading academics in semantics, pharmacology and informatics, and the project is driven by solid industry business requirements. There are thirty-one academic partners, nine pharmaceutical companies, and three software developers, and the work is split into clusters – the building of the technical system, the business aspect, and the creation of community and sustainability. To date the project has been successful, and to ensure its ongoing sustainability it has been turned into a non-profit foundation, the Open PHACTS Foundation, supported by an Associate Partner Community and subscriptions from major pharmaceutical companies. For more details on this innovative collaboration see an article on the project that has been published in Drug Discovery Today [9].
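Open PHACTS itself exposes its integrated data through semantic web technology, which is beyond a short example, but the core idea of “pre-competitive informatics” – joining literature, patent and chemistry records on a shared compound identifier so that every partner can query one pooled resource – can be sketched in a few lines. All of the identifiers, fields and records in the Python sketch below are hypothetical and serve only to illustrate that integration step.

```python
from collections import defaultdict

# Hypothetical records from three separate sources, keyed on a shared compound ID.
literature = [{"compound_id": "CMPD-001", "pmid": "123456", "title": "Kinase inhibition study"}]
patents    = [{"compound_id": "CMPD-001", "patent_no": "EP0000001", "assignee": "ExamplePharma"}]
chemistry  = [{"compound_id": "CMPD-001", "smiles": "CC(=O)OC1=CC=CC=C1C(=O)O"}]

def integrate(*sources):
    """Merge records from multiple sources into one view per compound identifier."""
    merged = defaultdict(dict)
    for source in sources:
        for record in source:
            merged[record["compound_id"]].update(record)
    return dict(merged)

# One pooled, queryable record per compound, instead of three siloed ones.
print(integrate(literature, patents, chemistry)["CMPD-001"])
```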
The audience question and responses for this session are as follows: Which of the following influences do you believe has the greatest impact on user expectations of research information services?
Products and services released by Google, Apple, Amazon or Microsoft – 68.6%;
Workflow demands imposed by their institutions or professional peers within a specific scholarly community – 22.9%;
Individual requirements for accomplishing research goal – 8.6%.
8. Incorporating new forms of content
The final session of the morning focused on the incorporation of new forms of content (video, audio, interactive content, etc.) into the research and educational environments. The first speaker was Victor Camlek, President of Camlek Concepts, who spoke on medical social networks as a source of new knowledge. Camlek noted that social networking is a ubiquitous part of our daily lives and that many professionals have been attracted to a variety of specialty networks that are able to deliver more value than a multimarket provider such as LinkedIn. His objective was to zero in on professional social networks of specific interest to physicians and other healthcare professionals (HCPs) who provide medical services or perform medical research, and to demonstrate the potential value of these groups as an effective, revenue-generating source of high-caliber content.
He presented the following facts from a 2012 study published in the Journal of Medical Internet Research: one out of four U.S. physicians uses social media daily to seek out medical information; about one out of seven physicians actually contributes content daily to a social media website, while more than half stick to physician-only websites; and only seven percent said they were active on Twitter. In addition, 117 of 485 survey respondents (24.1%) used social media daily or many times daily to scan or explore medical information. On a weekly basis or more, 296 of 485 (61.0%) scanned and 223 of 485 (46.0%) contributed. With regard to attitudes towards social networks, 279 of 485 respondents (57.5%) perceived social media to be beneficial, engaging, and a good way to get current, high-quality information. And with regard to usefulness, 281 of 485 respondents (57.9%) stated that social media enabled them to care for patients more effectively, and 291 of 485 (60.0%) stated that it improved the quality of patient care they delivered. The article concluded that while social media will never replace traditional channels of research and learning for the medical profession, it can be a valuable addition to a physician’s knowledge base – and a useful forum for discussion [7].
Camlek went on to describe some of the global social networks used by those in the medical profession, the content that they provide, how they monetize their efforts, and what risks they face, including patient privacy regulations, professional image protection, and rules of ethics and conduct. He concluded by saying that while these networks look promising, the major open question at this time is whether or not physicians will continue to have the time and willingness to participate. For more details, see Camlek’s interesting article that appears elsewhere in this issue.
Another interesting speaker in this session was Karie Kirkpatrick, Digital Publications Manager for the American Physiological Society (APS). She noted that scholarly journal publishers are testing and providing different ways for readers to view and engage with research. In her work she has found that what users really want is the following: better search and discoverability; a better user experience (e.g., navigation that makes sense); more format choices (PDF, HTML, EPUB, etc.); more visual and interactive content; better subject collections (e.g., taxonomies and semantic technologies); no multiple logins; and far better catering to shorter attention spans! She then highlighted a few of the projects that were underway to accommodate changing user needs – from exploring better user interfaces to providing new content formats. Many of her examples were from MIT Press since she had only recently moved to APS. One interesting project was MIT Press Batches, topical collections of journal articles developed for the Amazon Kindle and priced at $6.99 each, with nine to twelve articles per title selected by the editorial office. They identify the topics by looking at trends via altmetrics and analytics data and consult with journal editors after topics are identified. You can learn more at: http://www.mitpressjournals.org/page/batches. She noted that APS will soon be introducing graphical abstracts – a growing trend in publishing today – and that articles will be annotated with tweet-length abstracts in order to facilitate broad communication of an article’s content. She closed by saying that changes will be ongoing as users demand more on-target information in digestible amounts.
The audience question and responses for this session are as follows: Which of the following factors do you believe will play the greatest part in reshaping the scholarly record over the next 3 to 5 years?
Global demand for access to information across institutional/geographic/national boundaries – 53.6%;
Greater acceptance of new forms of output by the scholarly community, for purposes of awarding tenure and promotion – 42.0%;
The need to constrain costs for all stakeholders – 4.3%.
9. The changing landscape of scholarly communication
After the morning session closed, NFAIS members attended a members-only lunch during which Keith Webster, Dean of Libraries at Carnegie Mellon University, spoke on the changing landscape of scholarly communication. He noted that libraries remain crowded with students, but that the students are not there to talk to the librarian. They go to the library to collaborate with their peers. Faculty and researchers have been driven from the library by the success of e-journals (a comment repeated by Judith Russell, Dean of Libraries at the University of Florida, in a panel session on the following day; see the article by Christopher Kenneally, the panel moderator, appearing elsewhere in this issue). Webster referred to an OCLC study done in 2010 [3] on the perception of libraries which showed that in 2003 thirteen percent of academic researchers began their information-seeking in the physical library; by 2012 that had dropped to one percent! That same study noted that in 2010 the vast majority of students (eighty-three percent) began their research by using a search engine and that seven percent began with Wikipedia. Zero percent began with the library website! An Ithaka study done in 2012 on faculty perceptions showed a steadily-growing view that the role of librarians is becoming less important because faculty have access to information online. Faculty also believe (for the same reason) that funds spent on library buildings and staff should be re-allocated to other needs. He noted that since 1982 library spending as a percent of total university expenditures has actually been reduced by almost fifty percent (see ARL statistics at: http://www.arl.org/focus-areas/statistics-assessment/statistical-trends#.VXGvks9VhBc).
Webster then went on to describe the evolution of libraries. He said that the first-generation library was collection-centric and the librarian was the in-house expert; the second generation was client-focused and the librarian’s main role was library instruction; the third generation was experience-centered and the librarian served as an information and technology specialist; the fourth generation was a networked library that provided a connected learning experience and the librarian remained the specialist; and the current, fifth-generation library serves as a collaborative knowledge, media, and fabrication facility with the librarian very much behind the scenes, working more in outreach efforts with faculty and students.
He went on to comment on other major changes in scholarly communication: the journal’s evolution from print to digital; the growth in scholarly output, not only in volume but also in its global sources, with China expected to outpace the United States in the near future; the growth in the number of authors per paper due to global scientific collaboration, which is also impacting research workflows (a theme repeated throughout the conference); open access publishing; funding pressures, including publishing policies established by global government funding sources; open science; and so forth.
With all of these changes he believes that libraries will continue the migration from print to electronic and realign their service operations; that they will review the housing/location of their lesser-used collections; they will continue to repurpose the library as a primary learning space; they will reposition library expertise and resources to be more closely-embedded in the research and teaching enterprises outside of the library; and that they will extend the focus of their collection development from external purchase to local curation. He believes that librarians will be increasingly-embedded in research and teaching activities; that they will become campus specialists in areas such as e-science, academic technology and research evaluation; and that in these roles librarians will have meaningful impact. However, he noted that there are potential barriers to success, not the least of which is an unwillingness to change or lack of updated skills on the part of the librarians and the aforementioned perception of academics that librarians are not useful or credible partners.
With regard to science funding, Webster believes that the ever-increasing expenditure on healthcare in most nations will support continued expansion of the medical sub-segment of the STM market; that publishers will attempt to offset the decline in their print revenues through new solutions, for example by providing workflow solutions, analytics, and other cool ‘toys’; and that the R&D growth in Asia and the United States will continue to serve as the foundation for the STM market.
Webster’s presentation was excellent. In the absence of a full-text article, I strongly recommend that you take a look at his slides that are posted on the NFAIS website at: https://nfais.memberclicks.net/assets/docs/ANCO2015/webster.pdf.
The audience question and responses for this session are as follows: Which of the following do you anticipate as having the greatest impact on the success of your product or service in the next five years?
Continual progress in technology, standards and infrastructure – 63.9%;
Increasing public accessibility to research data and findings – 22.2%;
Evolving nature of scholarly outputs – 13.9%.
10. Interacting with content in new ways
The afternoon opened with a session that was focused on how users interact with content. The first speaker was Kate Lawrence, Vice-President of User Experience Research at EBSCO Information Services. Kate noted that today’s college student is an information-seeker who samples products, zeroing in on and using those that meet their ever-increasing desire for efficiency. She said that the products that make a student’s life easier, faster, and better are the ones that become part of the student’s information world, and that the products/services that she hears most about have common characteristics: they play to students’ optimism, they position themselves as a shortcut, and they are perceived as “cool”.
She said that at EBSCO they have found that students approach their studies in the same way as they organize their lives. They look at priority (what deadline is first), take into account their personal interests (is the deadline in my major or for something else that is less important), and they look at the return on investment – are they getting the best results for the amount of time invested? She found that students find library websites to be a challenge: too many choices are presented and they don’t necessarily understand the terminology. Students prefer Google and Wikipedia, and they want answers, not links. And their use of the Internet is changing how they read and process information. The Internet is creating an “eye byte” culture. Online reading is non-linear reading, and when working online there is offline competition for attention simply from the surrounding environment. Preparation for the SATs teaches students to skim and scan – there is no focus on “deep” reading. And it takes a bi-literate brain to switch between modes. She closed with the following recommendations for publishers and librarians when designing user interfaces:
Accept the influence of Google and Wikipedia and design for innovation that is rooted in familiarity.
Search Results is the page that matters. It should not be a pass-through, but a destination all its own.
Design for binary decision-making; users make a choice of Yes/No and Stay/Go on the Search Results page.
Create experiences that make students want to come back and make you a part of their own digital island infrastructure.
Kate goes into much more detail on why students prefer Google (it helps build their information-seeking confidence) and the lessons that she learned while working with the students in her article that appears elsewhere in this issue.
The second speaker was Pierre Montagano, the Business Development Director at Squid Solutions, who spoke on visualizing end-user workflows in order to drive product development. Founded in 2004, Squid Solutions has been providing tools that allow publishers to gather, visualize and analyze the data that is created by users of their information platforms. Montagano demonstrated how visualizing and interpreting the large amounts of data generated by end-user interactions with a content platform helps in making better product development decisions that will enhance the user experience on a publisher’s platform. It allows publishers to determine if product features contribute to higher user engagement and more full-text views, or if features simply clutter up the interface and drive minimal traffic. One can determine where the users came from (Google? A direct link?), their geographic make-up, how they navigate, how long they remain engaged, etc. For visual examples take a look at his slides on the NFAIS website at: https://nfais.memberclicks.net/assets/docs/ANCO2015/montagano.pdf, or better yet, read his article that appears elsewhere in this issue.
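Squid Solutions’ platform is proprietary, but the kind of question Montagano describes – where did users come from, and how engaged were they once they arrived – ultimately reduces to aggregating platform event logs. The Python sketch below uses made-up log records and field names purely to illustrate that aggregation; it is not Squid Solutions’ code.

```python
from collections import Counter
from statistics import mean

# Hypothetical usage-log records exported from a content platform.
events = [
    {"session": "s1", "referrer": "google.com", "country": "US", "fulltext_views": 3, "minutes": 12},
    {"session": "s2", "referrer": "direct",     "country": "UK", "fulltext_views": 1, "minutes": 4},
    {"session": "s3", "referrer": "google.com", "country": "US", "fulltext_views": 0, "minutes": 1},
]

def summarize(events):
    """Aggregate sessions by traffic source and report simple engagement metrics."""
    return {
        "sessions_by_referrer": dict(Counter(e["referrer"] for e in events)),
        "sessions_by_country": dict(Counter(e["country"] for e in events)),
        "avg_fulltext_views": mean(e["fulltext_views"] for e in events),
        "avg_session_minutes": mean(e["minutes"] for e in events),
    }

print(summarize(events))
```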
The third speaker in this session was Alex Wade, Director of Scholarly Communications at Microsoft Research, who talked about Web-scale search and new approaches to academic information discovery. The main focus of his presentation was on how Microsoft is building Cortana, a product that is being billed as the “researcher’s dream assistant”, which will help researchers retrieve the answers that they need from the enormous amount of information available on the web.
For the past seven years a group of developers at Microsoft Labs have been building a semantic storage system around academic content. Starting with articles obtained from publishers, repositories and web crawls, they extracted semantic entities (e.g. authors, keywords, journal names, organizations, etc.) and built relationships between them. In addition, the information was enhanced with complementary content found on the web (photos, Twitter tags, etc.). The system worked just as the developers had hoped, but unfortunately they had not taken into account how end-users would actually use the system, and it was less than efficient. So they then went out to talk to users and looked at studies on user search behavior. One such study was an Ithaka faculty survey from 2012 (see http://www.sr.ithaka.org/research-publications/us-faculty-survey-2012) that indicated a steady growth in the use of search engines as a starting point for research. That survey was supported by a Queensland University of Technology study on the information-seeking behavior of graduate students which showed that sixty-four percent of the students specifically used Google as their starting point [4] (this finding is also supported by Kate Lawrence’s research as discussed in her paper that appears elsewhere in this issue). What they found by looking at their own search engine, Bing, is that only twenty-five percent of web search sessions deliver successful results, forty-two percent require refinements, and forty-four percent of the sessions last more than a day. As a result, Microsoft has taken a longitudinal approach to looking at user behavior, moving from analyzing queries to analyzing sessions. They are also moving from keyword searching to semantic searching, which requires that they have knowledge of the user (location, query history, preferences, etc.), have knowledge of the world (people, places, things, and the relationships among them), and, in the end, provide a user experience that is personal, anticipatory and conversational.
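Wade did not share implementation details, but the shift he describes from keyword matching to entity-and-relationship lookup can be illustrated with a toy knowledge graph. Everything in the Python sketch below – the entities, relation names and the query – is invented for illustration only; it simply shows how an entity-centric store can answer a question that a keyword index cannot.

```python
# A toy knowledge graph: (subject, relation, object) triples over academic entities.
# All entities and relations here are invented for illustration.
triples = [
    ("paper:42", "written_by", "author:J.Smith"),
    ("paper:42", "published_in", "journal:Example Journal"),
    ("paper:42", "about", "topic:user experience"),
    ("author:J.Smith", "affiliated_with", "org:Example University"),
]

def related(entity: str, relation: str) -> list[str]:
    """Follow one relation outward from an entity."""
    return [o for s, r, o in triples if s == entity and r == relation]

# Entity-centric query: "which organization is the author of paper 42 affiliated with?"
for author in related("paper:42", "written_by"):
    print(author, "->", related(author, "affiliated_with"))
```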
Microsoft has built, and continues to build, a knowledge graph that is the backbone of Cortana. Wade provided an example of a query in which, after he typed the single letter “h”, a series of possible search options was listed based upon his most recent prior queries that began with that letter. He also gave an example of a “conversational” search session. His first question was “Who was in Guardians of the Galaxy?” The search results listed the cast of the movie. He then clicked on a specific actor and the search showed photos, news articles, etc. about that person. He then typed the question “Who is he married to?” and the result came back with her name and other data. He continued to drill down, asking where she was born, and the result came back with the city, state, and other data on that specific area – all very impressive.
Wade noted that academia is a subset of this knowledge graph. Microsoft has added twenty-five thousand academic journals, conferences, a comprehensive list of fields of study, researchers, etc. and has mapped the relationships across all of this data. They have implemented author disambiguation and a pro-active discovery feature whereby Cortana notifies the searcher of events in their area based upon their specific preferences. They are doing beta testing with a slice of their user base in order to continue to refine the service. To see a video of Cortana go to: https://www.youtube.com/watch?v=r-4yNMSNGV4.
The audience question and responses for this session are as follows: In five years’ time, which do you believe will be the most widely accepted mode of interacting with a mobile device?
Speech (voice command) – 54.5%;
Gesture (tap, wave, swipe) – 30.9%;
Touch (keyboard input) – 14.5%.
11. Miles Conrad lecture
The final session of the afternoon was the Miles Conrad lecture. This presentation is given by the person selected by the NFAIS Board of Directors to receive the Miles Conrad Award – the organization’s highest honor. This year’s awardee was Tim Collins, President and CEO of EBSCO Industries, Inc., and an article based upon his presentation is published elsewhere in this issue. In that article Tim discusses issues such as the key drivers behind changes in the end-user experience, business models, library workflow, and the competitive landscape within the context of the academic research process. In addition, he gives his perspective on the role EBSCO has played in these changes and the evolution of the marketplace and on what changes we’re likely to see in the future.
The audience question and responses for this session are as follows: In your personal experience, which of these aspects of advanced information services has most dramatically changed in the past 15 years (2000 – Present)?
User expectations – 78.4%;
Business model – 9.8%;
Search experience – 7.8%;
User interface – 3.9%.
12. Business models for collaboration
This final day of the conference opened with an interactive panel session moderated by Michael Cairns, CEO of Publishing Technology. The panelists were from four of the startup companies that participated in a session on the opening day (see Section 5): Melinda Kenneway, GrowKudos.com; Miriam Keshani, Sparrho; Sam Molyneux, Sciencescape; and Lenny Teytelman, ZappyLab (protocols.io). The objective was to discuss sustainability and business models for collaboration.
Cairns opened the session by asking each participant to talk about what they do, their business model, their customers, etc. Sam Molyneux began by saying that Sciencescape was created to help those in the biomedical field manage the literature in that domain, from learning about current happenings in real-time to delving into the archives of what has already been published. They plan future expansion into physics, chemistry and patents. They use social media, e-mail and speaking engagements at conferences such as this one to get the word out, and they also partner directly with publishers. When they first started they measured success by whether or not the technology they developed worked as planned; they then moved on to content acquisition, user engagement and the number of users; and ultimately it will all be about revenue growth for sustainability.
Miriam Keshani noted that Sparrho, founded in July 2013, is a recommendation search engine that focuses on recent research and what is emerging on the fringes of various scientific fields. The customers are researchers, post-docs, and even librarians – anyone who needs to keep up-to-date despite the volume of information that is being published. All of their technology was built in-house and is proprietary. They currently partner with the British Library, which provides access to scientific information back to the 1890s. She noted that their measures of success are similar to what Sam outlined: validation of the idea, user growth (they have between eight hundred and one thousand visitors to their site each day), and ultimately revenue.
Kenneway of Kudos noted that the idea for her organization was formed ten years ago when she was working in marketing at Oxford University Press. It was around that time that pressure was building for the development of article-level metrics for the evaluation of research. She knew marketing, but did not have the knowledge that her authors did regarding the value of their own research. When the field of altmetrics finally emerged, she decided to develop a comprehensive tool set that would allow authors to promote their work. Their business model has two strands: (1) the tools are free for researchers; and (2) revenue flows from the tools that have been built for publishers, universities and funding bodies to monitor the publications that flow from their respective efforts. Kudos had revenue from the start, so revenue remains very important to them, as are registrations on their platform and user engagement. Melinda noted that they are working with thirty-five publishers across the sciences, social sciences and the humanities. She said that scientists are better at the sharing aspect, while those in the other communities are better at writing the abstracts and related materials that promote their work. The driving factor for Kudos is that authors strongly believe their work is inadequately promoted and are willing to take primary responsibility for that promotion.
The final speaker was Lenny Teytelman from ZappyLab. He said that he is a geneticist and had no desire to be an entrepreneur, but his career was re-routed by a phone call from a friend who asked him about building an app. Lenny said that he would love to build one that would allow him to go through lab protocols step by step so he could monitor his progress. They could charge four dollars per app to the millions of researchers who would use it and split the revenue with Apple, and he would continue his university career with the benefit of an additional revenue stream. Two weeks later he realized that the app alone wasn't enough – there needed to be a way to capture and share changes to experimental procedures. Hence a new career began, both for him and his friend. They have four full-time staff and a team of eight students in Moscow, Russia doing Android and iOS development. They raised about $800K in seed and angel investment funds and invested $100K themselves, working from a basement to get started. Major pharmaceutical companies have expressed a lot of interest in the technology platform that they have built, but they do not want to sell it. They see a revenue model based on royalties from publishers and suppliers of chemical reagents. He noted that many people have tried to make this idea work, and it certainly is harder than he thought. In fact, the hardest part was committing to the idea and walking away from the academic career that he was so very close to achieving. His measure of success is that scientists will use the system to promote the corrections and changes that they make to protocols and that, as a result, other scientists will not waste research time and dollars on experiments that are doomed to failure. If the only success is the technology in and of itself, he will deem the initiative a failure.
Cairns asked how the companies went about growth. All said that they found raising funds to be more difficult than they had expected, but that winning users is the hardest part. Scientists are busy, and you need their time – to register on the platform, to learn how to use the system, etc. Partnerships with publishers and other organizations have been the most successful approach; having the partners link back to the company sites has worked well in promoting their products.
All of these startup companies are very much in growth mode, and I suspect you will be hearing more about them. I sincerely hope that NFAIS does a follow-up with them in a couple of years – it would make a great case study.
The audience question and responses for this session are as follows: In the context of working with start-ups, how would your organization prefer to introduce new technologies to an existing product or service?
Working with a partner, by licensing use of the technology – 55.2%;
Build internally to ensure seamless interoperability with existing systems – 32.8%;
Acquire the start-up and build on founder’s vision and problem-solving approach – 12.1%.
13.User demand and policy
The second session of the morning was also an interactive panel discussion, organized to take a look at some of the issues that concern content providers and librarians, such as the re-use, replication and sharing of data, social media, data mining, etc. Moderated by Christopher Kenneally, Director of Business Development, Copyright Clearance Center, the participants were Judith Russell, Dean of University Libraries, University of Florida, Brian F. O’Leary, Principal, Magellan Media Consulting Partners, and Martha Whittaker, Senior Manager, Marketing Strategy, American Society for Microbiology. The group discussed how they use policy in an attempt to meet user needs and the challenges that they face in doing so. All agreed that publishers and librarians need to re-imagine their content (similar to the comments made by Kalev Leetaru in his opening keynote), that traditional mindsets must change, and that new policies have to be developed with input from all of the information community stakeholders. An article based on Kenneally’s transcript of the session appears elsewhere in this issue, and a podcast version is available at: http://beyondthebookcast.com/the-shape-of-user-demand/.
The audience question and responses for this session are as follows: Which of the following do you believe is of greatest concern to adult users in adopting a new information application, technology or service?
The extent of control exerted over user behavior by the provider of the application, technology or service – 52.5%;
The level of protection surrounding the data gathered by use of the application, technology or service – 27.1%;
The amount of data gathered through use of the application, technology or service – 20.3%.
14.Predicting the future
The conference closed with a keynote by Michael R. Nelson, Adjunct Professor, Internet Studies, Communication, Culture and Technology (CCT) Program, Georgetown University, and Public Policy, CloudFlare, Inc., entitled Predicting the Future: What’s Next for the Net, the Cloud and Big Data?
He noted that when the first U.S. government hearing on the Internet was held in 1988 no one showed up. Now the Internet is used daily around the world. In fact, the amount of data that passes over the Internet today is hard to get one’s head around, and it is predicted that annual Internet IP traffic will reach the zettabyte threshold by the end of 2015 (see http://www.theguardian.com/technology/blog/2011/jun/29/zettabyte-data-internet-cisco). The technology curve is now bending steeply, and Nelson believes that in the next ten years we will witness as much change as we have experienced over the past two decades, perhaps twice as much if we do it right. The tools that are fueling change are: cloud computing, the Internet of Things, broadband wireless, open source software and analytic tools, machine learning, and the emergence of massive data sets that are often free (see Kalev Leetaru’s paper “Mining libraries: Lessons learned from twenty years of massive computing on the world’s information” that appears elsewhere in this issue). Nelson calls the combination of these tools the “Cloud Plus” and said that it is increasing the supply of data and unleashing hidden information. But the supply of data is not the only driver of change. There is a huge demand because users now expect magic. They are used to talking to their smart phones to get answers to questions or turning on their home lighting from miles away with the push of a button. They want more! They want super powers, and Nelson said that the Cloud Plus is giving them just that. He gave the example of the superhero Batman, who had the ability to determine what was going on around him – a sort of “situational awareness”. Nelson pointed out that the streaming of live data is allowing that to happen today – with storm tracking, traffic patterns, etc. Even X-ray vision is possible, and he referred to the Oculus Rift, a new virtual reality headset that lets players step inside their favorite games and virtual worlds. We even have superhero sidekicks like Siri and Cortana who are always there when we need them (reminiscent of Wade’s presentation discussed earlier).
Nelson noted that the Cloud Plus is enabling new startup ventures and that this “startup economy” is the fastest-growing segment of our overall economy. The cloud and the Internet allow industries to tap into a global workforce for problem-solving and outsourcing. The combination even enables the “sharing economy”, in which underutilized resources can be repurposed, with Uber and Airbnb being the most notable examples.
He said that so far we have gotten the Internet right. It has been allowed to grow and remain unregulated. It was born global and remains global, providing a world-wide platform that offers interconnectivity and facilitates economic growth. But governments around the world are starting to rethink this as they consider how they can “listen in” on Internet transactions. Nelson said that we need a better vision for the future of the Internet: we need smart privacy, data and security policies that protect information while encouraging innovation, and companies should err on the side of transparency. Today the most successful Internet-related books are pessimistic about the future. He, on the other hand, is optimistic – the key is getting the policies right. In closing he provided a list of reading materials that can be accessed on the NFAIS web site at: https://nfais.memberclicks.net/assets/docs/ANCO2015/nelson.pdf.
The audience question and responses for this session are as follows: Over the next five years, which of the following would you think might have the greatest impact on advanced information services?
Shifts in funding allocations for R&D in the enterprise and education sectors – 51.1%;
User concerns regarding widespread collection and subsequent use of all forms of data – 31.9%;
Reclassification (and potential regulation) of the Internet as a public utility – 17.0%.
15.Conclusion
Without intentionally doing so, the speakers at the conference reinforced one another in identifying a number of industry trends and issues: (1) Big data and data mining are the key drivers in accelerating the growth of knowledge, allowing us to uncover relationships and meaning that until now were undiscoverable. As I noted in last year’s conference overview [6] and as Mike Nelson noted during his presentation, the best book on this topic is Big Data: A Revolution that will Transform How We Live, Work and Think by Viktor Mayer-Schönberger and Kenneth Cukier, published in 2013; (2) Current information policies are inadequate to allow librarians and content providers to meet user needs, but the policies, and the mindsets that created them, are difficult to change; (3) Publishers need to rethink their data: how can it be dissected into usable bits? How can those bits be recombined to add more value? (4) Interfaces to digital content must adapt to the user, not the other way around; (5) Collaboration and partnerships are the new normal – publishers and librarians cannot do everything themselves; and (6) The economics of publishing is in transition – the ideal business models have not yet been identified.
But for me the most dramatic change in the information industry over the past forty years is that the gap between the user of information and the traditional creators/disseminators (i.e. the publishers and librarians) has closed considerably. When I was a college student the physical library was where scholarship took place and the librarian reigned supreme. Only the publisher had the financial resources, skills and technology to transform manuscripts into books and journals. Users really had no active part in the publishing process other than as authors and readers. Today, many students and young researchers are far more technically skilled than many of the staff in libraries and in publishing houses, and the resources required to create digital books, journals, datasets and supporting software tools are far less expensive than in the past. As noted by Judith Russell in one of the panel sessions, many users are quite innovative in finding new ways to interact with librarians, or in finding other ways to get information if they believe that the library is inadequate in meeting their needs. In that same panel session it was noted that perhaps publishers are now on the outside looking in because they tend to look at problems in traditional ways. Technology has closed the gap between the user and publishers/librarians. If Mike Nelson is correct in saying that in the next ten years we will witness as much technological change as we have experienced over the past two decades – perhaps twice as much if we do it right – will the gap be closed entirely? And if it is, what will the information publishing ecosystem look like, and what roles will publishers and librarians play? I look forward to watching the future unfold.
Plan on attending the 2016 NFAIS Annual Conference that will take place in Philadelphia, PA, USA from February 21–23, 2016. Watch for details on the NFAIS website at: http://www.nfais.org/.
Note: If permission was given to post them, the speaker slides are embedded within the conference program at: http://www.nfais.org/2015-conference-program (the term “slides” is highlighted in red).
About the author
Bonnie Lawlor served from 2002–2013 as the Executive Director of the National Federation of Advanced Information Services (NFAIS), an international membership organization comprised of the world’s leading content and information technology providers. She is currently an NFAIS Honorary Fellow. Prior to NFAIS, Bonnie was Senior Vice President and General Manager of ProQuest’s Library Division where she was responsible for the development and worldwide sales and marketing of their products to academic, public and government libraries. Before ProQuest, Bonnie was Executive Vice President, Database Publishing at the Institute for Scientific Information (ISI – now Thomson Reuters) where she was responsible for product development, production, publisher relations, editorial content, and worldwide sales and marketing of all of ISI’s products and services. She is a Fellow and active member of the American Chemical Society, and a member of the Bureau of the International Union of Pure and Applied Chemistry. She is also on the Board of the Philosopher’s Information Center, the producer of the Philosopher’s Index. She has served as a Board and Executive Committee Member of the Information Industry Association (IIA), as a Board Member of the American Society for Information Science & Technology (ASIS&T), and as a Board member of LYRASIS, one of the major library consortia in the United States.
Ms. Lawlor earned a B.S. in Chemistry from Chestnut Hill College (Philadelphia), an M.S. in chemistry from St. Joseph’s University (Philadelphia) and an MBA from the Wharton School (University of Pennsylvania).
About NFAIS
The National Federation of Advanced Information Services (NFAIS™) is a global, non-profit, volunteer-powered membership organization that serves the information community – that is, all those who create, aggregate, organize, and otherwise provide ease of access to and effective navigation and use of authoritative, credible information.
Member organizations represent a cross-section of content and technology providers, including database creators, publishers, libraries, host systems, information technology developers, content management providers, and other related groups. They embody a true partnership of commercial, non-profit and government organizations that embraces a common mission – to build the world’s knowledge base through enabling research and managing the flow of scholarly communication.
NFAIS exists to promote the success of its members and for more than 55 years has provided a forum in which to address common interests through education and advocacy.
References
[1] S. Brand, The Media Lab, Penguin, New York, USA, 1987, p. 202.
[2] M. Cagan, Inspired: How to Create Products that Customers Love, SVPG Press, 2008.
[3] C. De Rosa, J. Cantrell, M. Carlson, P. Gallagher, J. Hawk and C. Sturtz, Perceptions of Libraries, OCLC, Dublin, OH, 2010, available at: https://www.oclc.org/content/dam/oclc/reports/2010perceptions/2010perceptions_all.pdf [accessed on 8 June 2015].
[4] J.T. Du and N. Evans, Academic users’ information searching on research topics: Characteristics of research tasks and search strategies, The Journal of Academic Librarianship 37(4) (2011), 299–306.
[5] M. Kranzberg, Technology and history: Kranzberg’s laws, Technology and Culture 27(3) (1986), 544–560, available at: http://www.Jstor.org/stable/3105385.
[6] B. Lawlor, An overview of the NFAIS 2014 Annual Conference – Giving Voice to Content: Re-Envisioning the Business of Information, Information Services & Use 34(1–2) (2014), 3–15.
[7] B.S. McGowan, M. Wasko, B.S. Vartabedian, R.S. Miller, D.D. Freiherr and M. Abdolrasulnia, Understanding the factors that influence the adoption and meaningful use of social media by physicians to share medical information, Journal of Medical Internet Research 14(5) (2012), 117.
[8] D. Norman, J. Miller and A. Henderson, What you see, some of what’s in the future, and how we go about doing it: HI at Apple Computer, in: Proceedings of the Conference on Human Factors in Computing Systems (CHI), Denver, Colorado, USA, 1995.
[9] A. Williams, Open PHACTS: Semantic interoperability for drug discovery, Drug Discovery Today 17(21–22) (2012), 1188–1198.