View the final program PDF for Text Analytics Forum plus its co-located events. (Note: Access to sessions is subject to the registration pass selected.)
Now in its 7th year, Text Analytics Forum is the only conference covering all aspects of text analytics: text mining, data extraction, machine learning and the latest AI chatbots, sentiment analysis, auto-categorization, and more. Text analytics utilizes and applies taxonomies and other knowledge schema, adds intelligence to AI, makes enterprise search smarter, and enriches KM. The Forum also covers the entire range of applications that can be built with text analytics.
Text Analytics Forum is a place for sharing ideas and experiences in text analytics, for everyone from beginners to advanced developers. We cover all aspects of and approaches to text analytics, including machine learning and AI, semantic categorization rules, everything from build-your-own tools to advanced development and testing software, and human-machine hybrid applications.
Programming includes practical how-to’s, fascinating use cases that showcase the power of text analytics, new techniques and technologies, and new theoretical ideas that drive text analytics to the next level.
Monday, November 6: 9:00 a.m. - 4:30 p.m.
Upgrade to a Platinum Pass for your choice of two preconference workshops or access to Taxonomy Boot Camp, a co-located event with Text Analytics Forum 2023. Workshops are also separately priced.
Monday, November 6: 5:00 p.m. - 6:30 p.m.
Join us for the Enterprise Solutions Showcase Grand Opening reception. Explore the latest products and services from the top companies in the marketplace while enjoying drinks and light bites. Open to all conference attendees, speakers, and sponsors.
Tuesday, November 7: 8:30 a.m. - 5:00 p.m.
Upgrade to a Platinum or Gold Pass for extended access to KMWorld 2023, Enterprise Search & Discovery, and Taxonomy Boot Camp, a series of co-located events happening alongside Text Analytics Forum 2023. See the registration page for details.
Wednesday, November 8: 8:30 a.m. - 9:15 a.m.
Located in Capitol Ballroom
Our organizations have had a roller coaster ride with digital transformation over the last few years and most have embraced online work platforms. An organization's culture is created in the conversations between its members. But how do our enterprises encourage their people to interact, collaborate, and connect as they used to in a corporate building? How do they support learning, encourage deeper conversations and human relationships, and embrace and foster innovation? Speakers address these issues, discuss reclaiming conversations, share insights and experiences, and provide lots of tips and ideas for supporting KM joy, enhancing the flow of information and knowledge, and building stronger, more collaborative teams within the enterprise.
Sandra Montanino, Founder & Principal, Navig8 PD and formerly Director, Professional Development, Goodmans LLP
Kim Glover, Director, Internal Communications, TechnipFMC
Wednesday, November 8: 9:15 a.m. - 9:30 a.m.
Located in Capitol Ballroom
A longtime KM practitioner and Deloitte’s Knowledge Capital Practice Lead, Eyal discusses how organizations can move beyond the tactical version of KM as a storage repository and view knowledge as a source of growth for the organization and as its most important asset. He shares how connecting knowledge within the flow of work can drive better outcomes and benefits. Gain insights and ideas on how to get into the mindset that helps you capitalize on your organization’s knowledge.
Eyal Cahana, Knowledge Capital Practice Lead, Deloitte
Wednesday, November 8: 9:30 a.m. - 9:45 a.m.
Located in Capitol Ballroom
Generative AI has real practical implications in search applications. This presentation discusses and demonstrates how generative AI, large language models (LLMs), and vector search can be used to create more natural and conversational search experiences, generate more comprehensive and informative search results, and personalize search results to individual needs and interests.
Kamran Khan, President & CEO, Pureinsights Technology Corp.
Wednesday, November 8: 9:45 a.m. - 10:00 a.m.
Located in Capitol Ballroom
With the advent of advanced conversational AI, the cycle of learning through dialogue is digitalized. This AI not only serves as an interpreter, bridging the gap between novice queries and expert content, but also ensures continuous accessibility of vital institutional knowledge. Moreover, it offers organizations the desired control and oversight, providing insights about conversation effectiveness. This technology thus opens up opportunities for unlocking and safeguarding knowledge within organizations.
John Lewis, Chief Knowledge Officer, SearchBlox Software Inc. and Explanation Age LLC
Wednesday, November 8: 10:45 a.m. - 11:30 a.m.
Located in Grand Ballroom, Salon 1
What are the current and future trends for the field of text analytics? Join program chair Tom Reamy for an overview of the conference themes and highlights and a look at what is driving the field forward. The theme this year was Text Analytics for Fun and Profit, but that all changed with the launch of ChatGPT. A major new focus will be on what GPT and LLMs can do well and what the limits are, especially inside the enterprise. What is the role of text analytics (TA) in this new AI world? And what other new ideas, techniques, and applications are being developed using traditional and new TA capabilities? We are also introducing a new session to take the place of our usual Ask the Experts.
Tom Reamy, Chief Knowledge Architect & Founder, KAPS Group and Author, Deep Text
In the ever-evolving landscape of search technology, we've progressed from simplistic keyword queries to the complex algorithms of generative AI. But as we stand on this advanced frontier, it's crucial to recognize that we are amidst a journey, not at its end. Today, we'll explore why generative AI is merely an intermediate step and consider what's next.
Dorian Selz, CEO & Co-Founder, Squirro
Wednesday, November 8: 11:45 a.m. - 12:30 p.m.
Located in Grand Ballroom, Salon 1
In all the excitement around generative AI, it's easy to lose sight of the foundational role that text analytics plays in creating a practical, enterprise-scale application of this amazing new technology. Consider the example of automated question-answering for market and competitive intelligence research, in which a knowledge management system leverages highly specialized industry and technical content. The effectiveness and accuracy of the generative AI system—its ability to provide a meaningful answer to a researcher's direct question—depends first on selecting the best documents to analyze from within the corpus, and then identifying within just those documents the "summary-worthy sentences" that contain the richest material. In effect, this means distilling the best-of-the-best information to formulate a single "best answer." Much of that preparatory work is driven by text analytics. In this session, Northern Light CEO David Seuss connects the dots between robust taxonomies, deep and consistent tagging, text analytics, and today's remarkable generative AI algorithms and models.
David Seuss, CEO, Northern Light
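The two-stage flow Seuss describes, first selecting the best documents and then extracting the "summary-worthy sentences" from them, can be sketched as follows. This is a simplified illustration, not Northern Light's implementation: plain term overlap stands in for real relevance scoring, and the selected sentences would be handed to a generative model as context.

```python
import re

def tokenize(text):
    # Lowercased word set; a stand-in for real tokenization and weighting.
    return set(re.findall(r"[a-z]+", text.lower()))

def top_documents(question, docs, k=2):
    # Stage 1: pick the k documents sharing the most terms with the question.
    q = tokenize(question)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def summary_worthy_sentences(question, docs, n=3):
    # Stage 2: within those documents, rank individual sentences the same way.
    q = tokenize(question)
    sentences = [s.strip() for d in docs
                 for s in re.split(r"(?<=[.!?])\s+", d) if s.strip()]
    return sorted(sentences, key=lambda s: len(q & tokenize(s)), reverse=True)[:n]

docs = [
    "Vector search retrieves documents by embedding similarity. It scales well.",
    "Taxonomies organize concepts hierarchically for consistent tagging.",
    "Generative AI produces an answer from the retrieved context sentences.",
]
question = "How does vector search retrieve documents?"
best = top_documents(question, docs)
context = summary_worthy_sentences(question, best, n=1)
```

The distilled `context` is what a generative model would receive to formulate the "best answer."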
Wednesday, November 8: 1:30 p.m. - 2:15 p.m.
Located in Grand Ballroom, Salon 1
Jans Aasman is very much interested in how to build structured, reliable knowledge graphs from the ocean of unstructured text that is out there in the world. His thesis is that you need a collaboration between LLMs like GPT-3/4 and very smart people who know a) the structure they want in their knowledge graph, b) how to do the prompting to get structured data out of LLMs, and c) most importantly, how to validate the output of LLMs. As an exercise, Aasman recently built a wine knowledge graph using GPT-3/4. The LLM created the ontology and taxonomy, and it was used to fill the knowledge graph. Because you can never trust the output of GPT-4 entirely, a validation approach was used that also employs a combination of GPT-4 and web search. Aasman was surprised at how well it worked. In the process, tools were created to make it easier to build knowledge graphs. Aasman shows demos in which SPARQL and some new magic predicates were used to orchestrate the building and validation of new knowledge.
Jans Aasman, CEO, Franz Inc.
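The build-then-validate loop Aasman describes can be sketched in miniature. The LLM call and the validator below are hypothetical stubs (a real system would prompt GPT-4 and cross-check via a second GPT-4 pass plus web search, orchestrated in SPARQL); only triples that survive validation enter the graph.

```python
def llm_extract_triples(text):
    # Hypothetical stand-in for a GPT-4 prompt returning (subject, predicate, object)
    # triples; the second triple is deliberately wrong to exercise validation.
    return [("Chateau Margaux", "locatedIn", "Bordeaux"),
            ("Chateau Margaux", "grapeVariety", "Pinotage")]

# Stand-in knowledge used by the validator; a real validator would re-query
# an LLM and web search rather than consult a fixed set.
TRUSTED = {("Chateau Margaux", "locatedIn", "Bordeaux")}

def validate(triple):
    return triple in TRUSTED

def build_graph(text):
    # Extract candidate triples, keep only those that pass validation.
    return {t for t in llm_extract_triples(text) if validate(t)}

graph = build_graph("a paragraph about Bordeaux wines")
```

The point of the sketch is the control flow: extraction and validation are separate steps, and nothing unvalidated reaches the graph.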
Wednesday, November 8: 2:30 p.m. - 3:15 p.m.
Located in Grand Ballroom, Salon 1
Large language models (LLMs) such as ChatGPT are a hot topic and are being discussed from a variety of directions. There is a lot of excitement about what they can do, but also a lot of concern about how they do it. It's one thing to let these generative technologies write a poem or even this description of a talk (Nagy tried it, and the result wasn't bad). But can we trust them to give us the right answers to questions or make the right decisions? The clear downsides of this approach are the explainability of the results, the need for large amounts of high-quality data, and the potential for bias in the content generated by the models. One possible answer to these challenges is to combine these generative technologies with knowledge graphs that can help explain the results and support the generation of high-quality data to train the models. Combining generative AI and symbolic AI can ultimately lead to eXplainable AI (XAI) that can help in our daily work. Nagy shares the advantages of merging both technologies, shows how knowledge graphs help to create better and explainable results with generative technologies, and also shows how generative technologies can help to create better knowledge graphs. This combination will lead to the creation of intelligent content that enables better understanding and decision making.
Helmut Nagy, CPO, Semantic Web Company GmbH
Wednesday, November 8: 11:45 a.m. - 12:30 p.m.
Located in Rayburn, Meeting Room Level
Our speaker presents the journey to harness free-text data for analysis and knowledge discovery, discussing the benefits of deploying an automated system that generates easy-to-comprehend insights and sharing lessons learned, from implementation approach to practical use, to ensure that NLP is indeed able to assist in delivering business value, especially when it comes to medically oriented datasets. There's no one-size-fits-all solution. Only a "fit-for-purpose" approach can promise the greatest impact.
Alice Chung, Senior Analytics Manager, Medical Insights Lead, Genentech and PMP, Certified Innovation Manager (GIMI)
This talk walks through how textual data from multiple channels, such as voice of the customer, call interaction, chat history, and surveys, can be passed through a series of NLP modules to effectively produce directly usable outcomes. One major approach demonstrated in this presentation is real-time interventions for immediate resolutions: utilizing an NLP pipeline to aid organizations in collecting customer information—their challenges/opportunities—augmenting it with past similar learnings, and forming a targeted directory of actionable insights that have helped to resolve the challenges or explore the opportunities. This presentation also demonstrates the actual working solution that has been tested numerous times on internal use cases and has been producing exemplary results. It goes over the individual NLP modules as well as how they come together to form an effective solution pipeline. Keeping enhanced customer experience at the core of the impact list, these approaches will also result in a centralized customer view, textual data-supported initiatives, higher productivity/lower cost, and reduced customer resolution time.
Wednesday, November 8: 1:30 p.m. - 2:15 p.m.
Located in Rayburn, Meeting Room Level
This case study outlines the implementation of a text analysis solution that automates the process of insight discovery across data from multiple sources at a pharma company. As an example, typical data processing needs and pains of the field medical team are discussed; a sound technological solution for automated insight discovery is outlined that helps each member of the team quickly focus their attention on relevant insights only and delivers additional context for every insight, while revealing trending issues of interest and previously unknown emerging patterns. Ananyan shares the difficulties encountered during the incorporation of the insight discovery solution into the culture of a large organization, techniques for winning the support of upper management, and finally the benefits of streamlining knowledge discovery processes across the organization. The session includes a live demonstration of using the insight discovery solution to accomplish a couple of standard business tasks encountered by the employees of a pharma company.
Sergei Ananyan, CEO, Megaputer Intelligence
Wednesday, November 8: 2:30 p.m. - 3:15 p.m.
Located in Rayburn, Meeting Room Level
A text analytics pipeline can consist of many operations, including text normalization, preparation, summarization, classification, entity extraction, and more. Monolithic or purpose-built applications may perform all of these functions. However, what happens when the application is too large or too complex to support, or when a business requires greater flexibility? This session discusses how businesses can apply a microservices architecture to text analytics solutions. It looks at ways in which enterprises can leverage readily available cloud compute, along with open source libraries and commercial APIs, to create scalable and manageable services that adapt to new content types and new business problems. It also looks at outcomes and lessons learned from a use case of migrating from a monolithic architecture to a microservices approach.
Dan Segal, Information Architect, IBM
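The microservices idea in this session can be illustrated with a toy pipeline. Each stage is modeled here as a plain function in a registry so the composition logic stays visible; in a real deployment each entry would be an independently deployed and scaled HTTP service, and the stage names and logic below are invented for illustration.

```python
# Registry of small, single-purpose text services. Each takes a document dict
# and returns an enriched copy, so stages stay independent and composable.
SERVICES = {
    "normalize": lambda doc: {**doc, "text": " ".join(doc["text"].split()).lower()},
    "classify":  lambda doc: {**doc, "label": "finance" if "invoice" in doc["text"] else "general"},
    "extract":   lambda doc: {**doc, "entities": [w for w in doc["text"].split() if w.isdigit()]},
}

def run_pipeline(doc, stages):
    # Orchestration: route the document through the requested stages in order.
    for stage in stages:
        doc = SERVICES[stage](doc)
    return doc

result = run_pipeline({"text": "  Invoice   4711 received  "},
                      ["normalize", "classify", "extract"])
```

Swapping a stage or adding a new content type means registering one new service, not rebuilding a monolith, which is the flexibility argument the session makes.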
Wednesday, November 8: 4:00 p.m. - 5:00 p.m.
Located in Grand Ballroom, Salon 1
Text analytics requires software, taxonomies, content, and rules. We’re setting up a software lab with software from a number of leading vendors and, with help from the audience, we will set up a series of short, targeted exercises. Some of the exercises include prompt engineering—refining ChatGPT responses, what kinds of terms improve autocategorization, how to extract not just data but also relationships, and more. Come prepared to participate: Does your suggestion improve results or make them worse? Let’s have fun, find out, and learn some valuable real-life lessons.
Kim Larson, Director of Client Experience & Success, Product, Luminoso
Sergei Ananyan, CEO, Megaputer Intelligence
Tom Reamy, Chief Knowledge Architect & Founder, KAPS Group and Author, Deep Text
Thursday, November 9: 8:30 a.m. - 9:15 a.m.
Located in Grand Ballroom, Salon 2/3
AI and the internet are transforming our understanding of how the future happens, enabling us to acknowledge the chaotic unknowability of our everyday world. Back when we humans were the only ones writing programs, data looked like the oil fueling those programs. But now that machines are programming themselves, data looks more like the engine than its fuel. This is changing how we think about the world from which data arises and that data is now shaping as never before. We’ve accepted that the intelligence of machine intelligence resides in its data, not just its algorithms—particularly in the countless, complex, contingent, and multidimensional interrelationships of data. But where does the intelligence of data come from? It comes from the world that the data reflects. That's why machine learning models can be so complex that we can't always understand them. The world is the ultimate black box. Weinberger looks at the implications of this for people who work with data, those who share knowledge and insights inside and outside the enterprise, and those looking for ways AI can assist their organizations in future success.
David Weinberger, Harvard's Berkman Klein Center for Internet & Society and Author, Everyday Chaos, Everything is Miscellaneous, Too Big to Know, Cluetrain Manifesto (co-author)
Thursday, November 9: 9:15 a.m. - 9:30 a.m.
Located in Grand Ballroom, Salon 2/3
How do companies deliver AI capabilities across their organization? How can an organization build and leverage AI tools without having to develop multiple intelligent technologies for different applications? What’s the best way for organizations to build and evolve the large datasets needed to drive the most powerful, emerging AI tools? Centralizing AI capabilities into an enterprise data hub with reporting tools and advanced integration capabilities allows companies to leverage their investments more fully, bringing new and evolving AI capabilities into play quickly by leveraging a common data hub to build rich machine learning models. For KM, this means organizations can build powerful new capabilities. Hear how organizations are doing this today!
John Chmaj, Senior Director, KM Strategy, Verint
Thursday, November 9: 9:30 a.m. - 9:45 a.m.
Located in Grand Ballroom, Salon 2/3
Hoeffel delves into the realm of KM and explores how generative AI can revolutionize the way organizations capture, organize, and utilize information. Discover how LLMs and AI can enhance knowledge discovery, automate content generation, and improve decision-making processes. Through compelling demos and practical guidance, gain insights into leveraging generative AI to unleash the full potential of search and KM efforts. Discover how this powerful combination can drive innovation and propel your business forward.
Patrick Hoeffel, Head, Partner Success, Lucidworks
Thursday, November 9: 9:45 a.m. - 10:00 a.m.
Located in Grand Ballroom, Salon 2/3
Discovery is a key factor for knowledge management, helping people find the right information at the right time. Knowledge graphs enable simple, intuitive discovery, but can be time-consuming to create and manage. Today, we are at the cusp of a new era in improving content discovery with automation. Traditional automation tools—such as auto-tagging, auto-classification, and inference-based rules—can now be used together with LLMs to make knowledge graphs easier to create and more powerful in driving content discovery. Get insights into how these new approaches can provide information of higher quality, accuracy, and reliability, powering better content discovery and ultimately enabling more effective use of organizational knowledge.
Nimit Mehta, CEO, TopQuadrant
Thursday, November 9: 10:15 a.m. - 11:00 a.m.
Located in Grand Ballroom, Salon 1
With the rapid pace of developments in the field of natural language processing, it is critical to monitor changes by looking at arXiv and conference proceedings. Papers are ingested and automatically tagged based on a hand-crafted set of facets. Some of these tags are based on classification and some on entity extraction. What kinds of tags are large language models good at generating? What kinds of prompts are effective? What kinds of tags benefit from alternative approaches? This talk describes this open source monitoring system and discusses strategies for effective tagging with large language models.
Mark Butler, VP Engineering, Voise, Inc.
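A facet-driven tagging prompt of the kind Butler describes might be assembled as follows. The facets, their values, and the prompt wording are hypothetical, and the actual model call is omitted; a real system would send the prompt to an LLM and parse its reply into tags.

```python
# Hand-crafted facets: each facet name maps to its allowed values, so the LLM
# is constrained to a controlled set rather than free-form tags.
FACETS = {
    "task": ["classification", "summarization", "question answering"],
    "model": ["BERT", "GPT", "T5"],
}

def build_prompt(abstract, facets):
    # Assemble one instruction line per facet, then append the paper abstract.
    lines = ["Tag the paper abstract with one value per facet."]
    for facet, values in facets.items():
        lines.append(f"{facet}: choose from {', '.join(values)}")
    lines.append(f"Abstract: {abstract}")
    return "\n".join(lines)

prompt = build_prompt("We fine-tune BERT for classification.", FACETS)
```

Constraining the model to facet values like this is one way to keep LLM-generated tags consistent with a hand-crafted scheme; facets needing entity extraction would use a different prompt shape.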
Thursday, November 9: 11:15 a.m. - 12:00 p.m.
Located in Grand Ballroom, Salon 1
In today's fast-paced digital world, knowledge is critical to success. But with the overwhelming amount of text data being generated every day, it's become increasingly difficult to find precisely what you are looking for. This is where vectorization comes in, enabling us to represent unstructured data in a structured and analyzable form. This presentation shares the speaker’s experiences in using character-level vectorization approaches and transformer-based approaches to map knowledge in our organization as part of our knowledge management strategy. We discuss how we have leveraged these techniques to represent both the formal knowledge products that we produce and share with the public and the experiential knowledge of our personnel. It highlights the benefits and challenges of using different vectorization techniques in different contexts. For instance, we'll showcase how character-level vectorization approaches have been particularly useful in assembling a puzzle of data related to experience among our personnel for expertise location, while transformer-based approaches have excelled in increasing the relevance of search results when dealing with larger text data such as publications. By the end of this presentation, attendees will have gained a deeper understanding of how vectorization can be used to map knowledge in an organization, along with a few approaches that can be used to represent both formal and experiential knowledge.
Kyle Strand, Lead Knowledge Management Specialist and Head of Library, Inter-American Development Bank (IDB)
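The character-level side of Strand's comparison can be sketched with character n-gram vectors and cosine similarity. This is a minimal illustration, not IDB's implementation; it shows why character-level vectors tolerate spelling variation, which helps when matching names and skills for expertise location.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    # Represent a string as counts of its overlapping character trigrams.
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A one-character spelling difference barely moves the vector...
sim_close = cosine(char_ngrams("text analytics"), char_ngrams("text analytic"))
# ...while unrelated strings share no trigrams at all.
sim_far = cosine(char_ngrams("text analytics"), char_ngrams("budget report"))
```

Transformer embeddings would replace `char_ngrams` for longer texts like publications, where semantic rather than surface similarity matters.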
In today’s polarized world, sustainability solutions require skillful conversation. Text analytics can help. At the University of Maine, we asked, “What are the features of sustainability conversations that lead to innovation, cohesion, and intent?” First, we captured transcripts from aquaculture town-hall conversations which could be fraught with conflict and debate. Next, we coded the transcripts for rhetorical intent, or what we call “discussion disciplines”—statements, questions, positivity, acknowledgments, synthesis, and even snarkiness. In addition, from the transcription, we developed a set of terms for identifying outcomes. Then, we evaluated several machine learning approaches, such as TF*IDF, Google’s (open) BERT, and a combination of BERT and ResNet. We found the large language models such as BERT recognize the discussion disciplines with the greatest accuracy, compared to the human-coded data. We used the “winning” model to ingest more than 21,000 open source utterances, labeled each for the discussion disciplines, and labeled each transcript for its likely outcomes. With this large dataset, we found that acknowledgment and positivity have a positive, large, statistically significant impact on intent. Now, using similar models (even hand-coded) and careful observation, sustainability leaders can be better equipped to change the tone, innovate, and get diverse collaborators focused on positive environmental and societal impact.
Katrina B Pugh, Lecturer & President, Columbia University & AlignConsulting
Thursday, November 9: 10:15 a.m. - 11:00 a.m.
Located in Rayburn, Meeting Room Level
When organizations are faced with moving decades of content to a new digital platform, there are quite literally millions of details to manage. Re-platforming peer-reviewed content to a new hosting environment is a painful process and a project fraught with concern. However, moving content to a new platform can bring about many positive side effects and result in a healthy digital transformation. A platform migration offers the unique opportunity to dive deep into text analytics and content details to reveal and correct issues in content markup that impact downstream discovery. This case study presentation discusses AIP Publishing and its move to a new content platform for its portfolio of highly regarded, peer-reviewed journals, including a growing collection of open access titles, that cover all areas of the physical sciences. A massive amount of content had to be analyzed, converted, and delivered to a new platform—47.4 gigabytes of XML and 1.73 terabytes of assets! No link could be overlooked, and no asset could go missing. Gross demonstrates the tools used to analyze and validate XML files as well as to health-check the corresponding digital assets (e.g., verify that for every image there is at least one callout in the XML and for every callout there is an image). Findings from the analysis were grouped into categories—Summary Analytics and Errors and Warnings. Issues in XML structure were identified, providing the road map to convert the entire collection, which is now optimized and architected for decades of future success.
Mark Gross, President, Data Conversion Laboratory
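The image/callout health check Gross describes can be sketched with the standard library. The element and attribute names (`graphic`, `href`) are illustrative JATS-style assumptions, not AIP Publishing's actual schema; the two-way check is the point.

```python
import xml.etree.ElementTree as ET

def check_assets(xml_text, assets):
    """Two-way health check: every image asset needs at least one callout in
    the XML, and every callout must point at an existing asset."""
    root = ET.fromstring(xml_text)
    callouts = {g.get("href") for g in root.iter("graphic")}
    orphan_assets = set(assets) - callouts   # images with no callout
    broken_links = callouts - set(assets)    # callouts with no image
    return orphan_assets, broken_links

xml_text = """<article>
  <fig><graphic href="fig1.png"/></fig>
  <fig><graphic href="fig2.png"/></fig>
</article>"""

# fig3.png exists on disk but is never called out; fig2.png is called out
# but missing from the asset list.
orphan_assets, broken_links = check_assets(xml_text, ["fig1.png", "fig3.png"])
```

At migration scale, findings like these would feed the Errors and Warnings category rather than be fixed one by one.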
Huge leaps in computing capacity, storage, and cloud computing and developments in AI over the last decade have rapidly accelerated commercially focused scientific and engineering progress in areas that include increasing battery efficiencies in EVs, generating new antibodies, powering computational drug discovery, and even semiconductor chip manufacturing. These advances have led to the need for vast amounts of data for text and data mining. The mining of highly validated, peer-reviewed scientific literature is complementing other resources such as clinical trials, patents, industrial datasets, and even SEC filings to uncover patterns in improving the quality of drug discovery and applying it to a broad spectrum of key fields such as energy and finance for predictive analytics. This talk covers some of the use cases where mining alternative datasets from scientific literature is helping generate new derivatives and IP across multi-billion-dollar domains.
Prathik Roy, Product Director, Data Solutions & Strategy, Springer Nature
Thursday, November 9: 11:15 a.m. - 12:00 p.m.
Located in Rayburn, Meeting Room Level
Large language models, generative AI, and NLP have entered the domain of everyday conversation thanks to the disruption of ChatGPT. What was once a niche area is becoming more widespread. Have you ever been curious about the genesis of NLP? In this talk, Osborne introduces attendees to an array of characters, from obscure names in linguistics like Ferdinand de Saussure to well-known mathematicians like Alan Turing to the pioneers of the transformer neural network architecture, Vaswani et al. Discover how this field has evolved and advanced to arrive at mind-blowing technologies like ChatGPT. Throughout, attendees learn about both tried-and-true techniques and where the domain is headed.
Mary Osborne, Senior Product Manager - Natural Language Processing, Analytics R&D, SAS and Duke University
Join Sabo and Swilley for a session where they highlight datasets which are both full of content and readily accessible for text analytics exploration. They also explore the different text analytics methods that are applicable to go from questions to decisions for each of these datasets. The academically minded learn about useful datasets to leverage in class or for that next NLP class project. The industry-minded learn about public sector data that can provide intelligence and marketing signals. And each of these datasets has some relevance to government and NGO workers, as they have largely come from that sector. The pair highlights how text analytics and NLP, applied properly, can save lives and improve the quality of life.
Thursday, November 9: 12:15 p.m. - 12:30 p.m.
Located in Grand Ballroom, Salon 2/3
GenAI is the next make-or-break moment for KM leaders. However, GenAI alone won't reduce the significant challenges associated with content governance. Join this session to learn how search augments AI’s capabilities. Uncover 11 cutting-edge machine learning models (including generative answering) used by over 600 leading enterprises to deliver world-class digital experiences every day.
Juanita Olguin, Senior Director, Product Marketing, Coveo
Thursday, November 9: 12:30 p.m. - 12:45 p.m.
Located in Grand Ballroom, Salon 2/3
KMWorld magazine is proud to sponsor the 2023 KM Awards, KM Promise and KM Reality, which are designed to celebrate the success stories of knowledge management. The awards will be presented along with Step Two’s Digital Awards.
Thursday, November 9: 1:00 p.m. - 1:45 p.m.
Located in Grand Ballroom, Salon 1
Ontology is over. Controlled vocabulary is dead. GPT renders them pointless, and good riddance. Whatever their (very real) limitations, large language models are really good at finding synonyms. "Did another term appear in this same context?" is exactly what they do. So no one needs to create any more committees to control terminology. Sure, pick terms so you can talk to your industry sensibly, but don't tell people what they can and can't say, or what goes where. GPT4 knows, and so did GPT3. There is a bit of a training lag, and we all (humans too) are learning new words: mpox and poxvirus, SARS-COV-2, COVID (sounds like “covfefe” but isn't), so we need to capture the latest terms, and MeSH has been helpful for medical use. But, going forward, ontology isn't the right approach. Even metadata is losing its grip: Contextual information like date and location are tagged onto images automatically, and other tags, mostly, can now be derived automatically (with some safeguards). Thanks in part to all those hard hours spent adding tags to images, still and moving, the language models have learned what we've been trying to teach them, and we can now stop and let them do what they do best.
Sharon Flank, Principal, DataStrategy Consulting, LLC
Dave Forbes, UX/HCD Consultant, DataStrategy Consulting, LLC
How can taxonomies be extended and enriched to power autocategorization of content to provide more value and scale for your organization? How can you improve your text analytics performance through the use of enterprise taxonomies? By marrying the strengths of information science with the technologies of data science and placing a human curator at the center, you can easily extend or integrate enterprise taxonomies for autocategorization of enterprise content. Downs describes the differences between traditional information science and data science approaches before outlining a process that places a human-in-the-loop to harness their full capabilities. She demonstrates multiple case studies where enriched taxonomy metadata can drive autocategorization of content. Through these case studies, Downs demos the iterative work of a taxonomist or ontologist in bridging the gap between human-readable taxonomy concepts and computational/NLP algorithms generating machine-readable content.
Sarah Downs, Director, Synaptica Client Solutions, Synaptica, part of Squirro AG, UK
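A minimal version of taxonomy-driven autocategorization, assuming a toy taxonomy in which each concept carries a preferred label plus enriched synonyms: a document is tagged with every concept whose terms appear in it. Production systems add stemming, weighting, and the human-curated rules the session describes.

```python
# Toy enterprise taxonomy: concept label -> enriched term list (synonyms and
# variants added by a human curator). Both concepts and terms are invented.
TAXONOMY = {
    "Machine Learning": ["machine learning", "deep learning", "neural network"],
    "Search": ["enterprise search", "query", "relevance"],
}

def autocategorize(text, taxonomy):
    # Tag the document with every concept whose terms occur in the text.
    text = text.lower()
    return sorted(concept for concept, terms in taxonomy.items()
                  if any(term in text for term in terms))

tags = autocategorize("We tuned relevance with a neural network ranker.", TAXONOMY)
```

Enriching a concept's term list (say, adding "ranker" under Search) immediately widens what the categorizer catches, which is the scaling argument for taxonomy-driven tagging.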
Thursday, November 9: 2:00 p.m. - 2:45 p.m.
Located in Grand Ballroom, Salon 1
There is a vast amount of information embedded in unstructured content. People at research-intensive, data-driven organizations often face the challenge of finding the right content at the right time and connecting insights across documents. Knowledge graphs target this challenge head on—bringing structure to unstructured content and creating meaningful relationships to power information discovery. What do you need to build such a knowledge graph solution out of mostly unstructured documents? In this talk, attendees learn a framework for defining knowledge graph architecture with guidance and considerations at each of the architectural components— from extracting entities and relationships to populating a graph database and enabling downstream applications. Learn how to strike the right balance between deterministic and statistical approaches to populating a knowledge graph depending on the AI maturity of your organization in the context of proven case studies at organizations in the pharmaceutical and enterprise learning industries.
Urmi Majumder, Principal Data Architecture Consultant, Enterprise Knowledge, LLC
Sara Nash, Principal Consultant, Enterprise Knowledge LLC
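The deterministic end of the spectrum Majumder and Nash describe (pattern-based extraction, as opposed to statistical models) can be sketched as a single "X is a Y" pattern populating a tiny graph. The pattern and sentences are invented for illustration; a graph database would replace the dict.

```python
import re

# One deterministic extraction rule: "<Subject> is a <object>." yields an
# is_a edge. Real systems layer many such patterns plus statistical extractors.
PATTERN = re.compile(r"(?P<subj>[A-Z][\w ]*?) is a (?P<obj>[\w ]+?)[.]")

def extract_edges(text):
    return [(m["subj"], "is_a", m["obj"]) for m in PATTERN.finditer(text)]

def build_graph(edges):
    # Adjacency map: subject -> list of (predicate, object) edges.
    graph = {}
    for s, p, o in edges:
        graph.setdefault(s, []).append((p, o))
    return graph

graph = build_graph(extract_edges("Aspirin is a drug. Ibuprofen is a drug."))
```

Deterministic rules like this are precise but narrow; the balance against statistical extraction depends on the organization's AI maturity, as the talk discusses.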
How will large language models (LLMs) like ChatGPT and GPT-4 impact the future of enterprise knowledge management? Hilger and Hamilton explore this question as a shift from static computational systems to more dynamic, interactive tools that actively participate in an organization’s processes, content, culture, and technology. The combination of fluid intelligence systems like GPT with text analytics, databases, and knowledge graphs will create enterprise cognitive architectures: systems that augment and even automate cognitive labor.
Ethan Hamilton, Analyst, Enterprise Knowledge LLC
Joseph Hilger, COO, Enterprise Knowledge, LLC
Thursday, November 9: 3:00 p.m. - 3:45 p.m.
Located in Grand Ballroom, Salon 1
How does one effectively explore and extract insights from large collections of highly heterogeneous documents across a diverse set of use cases? The Institute for Defense Analyses (IDA), a nonprofit corporation operating three federally funded research and development centers, helps government sponsors answer challenging questions that lie at the intersection of national security, science, technology, and policy. These questions often involve making sense of large numbers of documents and files. Moreover, factors important to one question may not be important for others. Most existing solutions to document analysis and review employ a one-size-fits-all approach that cannot be easily adapted to different use cases. IDA has developed in-house text analytics capabilities to facilitate search, exploration, and analysis of large collections of files in ways that enable higher degrees of flexibility in answering research questions. An extensible set of machine learning and NLP methods can be targeted and applied in real time to subsets of documents of interest through a no-code, point-and-click, web-based user interface, which allows for more focused insight extractions. In addition, programmatic APIs are accessible, both locally and remotely, which allows customizing document processing and analyses to a wide range of different use cases without requiring source code modifications. For instance, using the API, documents have been ingested from diverse data sources, and custom machine learning models have been trained and applied to auto-label documents to make them findable or filterable. This talk provides a high-level overview of this work with illustrative examples.
Margaret Zientek, Research Staff Member, Institute for Defense Analyses (IDA)
Thursday, November 9: 4:00 p.m. - 4:15 p.m.
Located in Grand Ballroom, Salon 2/3
The age of generative AI is changing the nature of work, and nobody is more impacted than the knowledge manager. While this transformation may seem overwhelming at first, GenAI offers knowledge managers an opportunity to play a mission-critical role in the automated future. Former knowledge manager and AI leader Sedarius Tekara Perrotta explains why knowledge management is evolving into a critical and strategic role over the next 2–3 years and provides specific advice on how to position KM expertise to take advantage of one of the biggest business trends in decades.
Sedarius Tekara Perrotta, KM Practitioner, AI Consultant & CEO, Shelf
Thursday, November 9: 4:15 p.m. - 5:00 p.m.
Located in Grand Ballroom, Salon 2/3
Join members of the KMWorld community as they provide insights, inspiration, key ideas, and innovations shared at this year’s conferences as well as what our panelists are seeing within the rapidly changing field of KM.