Battle of the BOTs

Posted in June 2017


Frank: Welcome back from Conference season, Doug.  By my calculation, I missed out on about one month of cookies at work. What was the most interesting thing you learned during your travels?
Doug: At Lavacon, Stefan Gentz from Adobe gave a very interesting keynote about how attention spans have been declining at a faster rate over the past decade.  The problem is compounded by the exponential growth in the amount of content.  So, there is an increasing need for content filters.
Frank:   How is that going to work out?
Doug: At Simply XML, I think I search Google at least half a dozen times a day. I don't think I'm alone. But the solution is not just about search. Across organizations and among customers, BOTs are appearing that filter content, sometimes before an information consumer is involved at all.
Frank:  Are you thinking of replacing me with a BOT?
Doug:  Of course not, Frank!  But there is something we all need to think about here.

Google as a Filter

Google probably knows more about me than you do, Frank. Google knows my search history. It knows what web sites I visit. I assume it knows other "stuff" as well.

Sometimes Google searches are eerily accurate. How did it know the apparent nuances of my simple search to provide, on the first page, links to the sites that are most relevant to my search?

But other times I need to search through pages of headings or a long list of links, reformulate my question, or find another way to get the information I need.

Where Access is Headed

Technical Publications and organizational websites use both links and search to help their users get where they want to go. They use HTML, for sure, but underneath, the base content architecture is shifting to XML. XML allows the disciplined application of metadata, and this can provide information consumers with the right information, at the right time, on their device of choice.
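As a rough sketch of what that metadata looks like in practice, a DITA topic can carry audience, product, and delivery hints in its prolog. The element names below are standard DITA; the id, product, and metadata values are invented for illustration, not drawn from any real deployment:

```xml
<!-- Hypothetical DITA concept topic. The id, prodname, and metadata
     values are illustrative only. -->
<concept id="install_overview">
  <title>Installation Overview</title>
  <prolog>
    <metadata>
      <!-- Who the content is for -->
      <audience type="administrator"/>
      <!-- Which product and version it applies to -->
      <prodinfo>
        <prodname>ExampleProduct</prodname>
        <vrmlist><vrm version="2.1"/></vrmlist>
      </prodinfo>
      <!-- A custom hint a delivery system or BOT could filter on -->
      <othermeta name="device" content="mobile"/>
    </metadata>
  </prolog>
  <conbody>
    <p>This topic describes the installation process.</p>
  </conbody>
</concept>
```

A delivery pipeline or BOT can match these prolog values against a reader's profile, which is what makes "right information, right time, right device" more than a slogan.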

What about the Rest of the Organization?

Back to Stefan Gentz. In his keynote, he talked about how, within the next few years, BOTs would filter content for people who need the right information, at the right time, on the right device. The implications for content creation and for reader access are dramatic.

Content will need to be intelligent, with relevant metadata. Some of this metadata will be provided by the author, but some will be applied automatically, based on the author's position, department, or even other content he or she has previously written. More importantly, there will need to be realistic content standards at the enterprise level.

The Enterprise Disconnect

At the beginning of my recent conference presentations I asked the audiences, "How many of you are implementing DITA?" More than half the hands went up. But then I asked, "How many of your organizations are implementing DITA at the enterprise level?" No hands went up. We are seeing non-technical organizations walk away from XML and DITA because they are too complicated or too expensive to implement across an organization. Our perspective is that the complexity does not come from DITA or XML. It comes from organizations failing to realistically assess enterprise needs. DITA/XML is not the goal. Consistency, reuse, and flexible publishing are the goals.

Fundamentally, organizations need a way to structure content consistently and appropriately across the entire enterprise. The content will be tagged with metadata and will have XML underneath. But, IMHO, DITA and other XML structures need to be hidden behind the scenes. Authors who develop marketing materials, compliance information, and reports don't need to understand or use more than a few of the 640+ elements in the DITA tag set. And they certainly don't need specialized elements to augment the existing DITA tag set. They need a way of writing so that their paragraphs, tables, lists, topics, and sections fit together in a way that makes sense for readers.
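To illustrate the point, here is a hedged sketch of how little of the tag set a non-technical author actually needs: a perfectly usable generic DITA topic can be built from fewer than ten elements. The topic content below is invented for the example:

```xml
<!-- Illustrative generic DITA topic using only a handful of core
     elements: topic, title, body, section, ol, li. -->
<topic id="expense_policy">
  <title>Submitting an Expense Report</title>
  <body>
    <section>
      <title>Steps</title>
      <ol>
        <li>Scan your receipts.</li>
        <li>Enter each expense in the reporting tool.</li>
        <li>Submit the report to your manager for approval.</li>
      </ol>
    </section>
  </body>
</topic>
```

An authoring tool can hide even this markup behind familiar paragraph and list styles, so the author never sees an angle bracket.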

There needs to be a vastly increased focus on structuring content so that authors can work efficiently and information consumers can actually do something with the content they read. This need plays to Simply XML's historical strengths in structured writing and in using MS Word as the UI at the front end of the content supply chain.

Please think about it. Does your organization need DITA? Or does it need to implement a structured authoring standard that gets the right information, to the right person, at the right time, on the right device, in the information consumer's language of choice? The solution is not really about XML or DITA. The solution involves authoring and publishing structured content so that readers get what they need. It is also about providing information that they can understand and act upon. Consistency should be a key focus at many levels. And the efficiency and effectiveness of this process, from author to reader, will be greatly enhanced by the latest technology: shared repositories, modern publishing, reuse, and workflow management.

Bottom Lines

While continuing to focus on the simple application of XML for the enterprise, Simply XML will also emphasize the simple application of cognitively based structured writing principles.

The battle is going to be fought at the enterprise level, driven by a new generation of consultants and technology providers. The BOTs, the authors, the readers, and the CFOs are going to love it.