Big data, AI, IoT, and cloud computing are distinct technologies, and each has emerged and evolved independently. In recent years, however, they have grown increasingly interdependent, opening up new possibilities for innovation, efficiency, and productivity.
In most industries and enterprise environments, these technologies now work in unison, or at least in an orchestrated manner, to drive innovation and automate business processes. That is why it is worth looking at the relationships between them.
- Big data and cloud computing: Exponentially growing big data found a natural ally in cloud-based analytics platforms. Frameworks such as Hadoop made distributed analysis of Big Data practical, and cloud providers such as Amazon Web Services (AWS) soon offered such capabilities as managed services. The Analytics as a Service (AaaS) model is an outcome of this convergence.
- Internet of Things and Big Data: The Internet of Things has emerged as a promising technology built on device-to-device connections that can streamline operations between machines, and between machines and humans. As IoT devices produce huge volumes of digital data rich with user insights, Big Data analytics engines access this data through cloud platforms to produce relevant and useful consumer insights.
- Internet of Things and AI: Artificial intelligence (AI) allows machines to behave and interact like humans by mimicking human reasoning. Connected IoT devices loaded with AI-based algorithms increasingly use these capabilities to automate and streamline how devices function.
Leveraging the Digital ABCs: AI, Big Data, and Cloud Computing
By Hong Zhou, Megan Prosser
Editor’s note: Today’s guest post is by Hong Zhou, Director, Intelligent Services Group, with editorial support from Megan Prosser, Senior Director of Marketing, both at Atypon
At the London Book Fair (LBF) — back in person for 2022 — I was invited to address LBF’s Research & Scholarly Publishing Forum, organized to explore how new technology and new ideas can advance our industry.
My talk focused on a practical approach to leveraging Artificial Intelligence (AI), Big Data, and Cloud computing — what I like to call the digital ABCs — throughout the scholarly publishing workflow, as well as tackling some common misconceptions about them. AI is a broad set of technologies in which machines use computational capabilities to “think” like humans; AI generates value primarily from Big Data, which is collected, stored, and processed to deliver value to customers via the Cloud. Together these digital ABCs unlock a powerful set of opportunities.
At events like this, my mission is to show scholarly publishers and societies that AI adoption is achievable, accessible, and empowering. AI can supercharge discovery and enrich content; it can help publishers better understand their audiences and enable individual researchers to know more, do more, and achieve more — using not necessarily the most advanced solutions, but the right solution to solve a specific problem.
And AI can advance your business, and ultimately our entire industry, from content providers to knowledge providers. For example, if a phone directory is the content, then the address you need at a given moment, for a given purpose, is the knowledge.
AI applications unlock value by unlocking knowledge from data using advanced AI models, then relentlessly calibrating and refining that capacity by learning from humans — both directly (e.g., from user behaviors) and indirectly, from the content we generate (including text, images, video, and audio).
I enjoy sharing specifics of the solutions my Atypon team designs for clients, but far more important for most scholarly audiences (outside the tech trenches) is getting across the general principles that apply to every scholarly publishing organization, no matter their size, budget, or the specific nuances of their needs and objectives.
Everyone in our industry should be excited about AI, because while the underlying technology is structurally complex, the practical opportunities are far simpler than you might think.
Not everyone is, of course, and that’s entirely understandable! Buzz tends to create blur, and the hype around AI can be overwhelming, even paralyzing. Misconceptions lead to missed opportunities, so my first step is always to defuse apprehension and anxiety.
This includes setting out the key benefits already available to scholarly publishers from the digital ABCs.
AI benefits for Scholarly Communications
Perhaps the most important misconception-disarming point is that successful use of cognitive technologies does not replace human capabilities: instead, AI tools augment human capabilities.
By automating routine but necessary processes, AI can remove burden; by analyzing millions of data points, it can gain insights and make predictions, and, in doing so, radically extend our capacity for making better decisions, faster.
At their core, most AI applications can be distilled down to prediction, enrichment, and connection. AI applications can help humans better understand different types of content, such as text, images, and videos; AI can also help content creators and distributors to better understand their audiences. In other words, AI helps us build stronger connections between people, connecting people with the right content, and finding vital connections or associations between different pieces of content.
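To make the "connection" idea concrete, here is a minimal, purely illustrative sketch of one classic technique behind content-to-content matching: bag-of-words cosine similarity. The article titles and the `most_related` helper are hypothetical; real discovery systems use far richer models, but the underlying prediction of "which content is most related?" works on the same principle.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine of the angle between two term-count vectors (0.0 to 1.0)."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_related(query, corpus):
    """Return the key of the corpus item most similar to the query text."""
    qv = vectorize(query)
    return max(corpus, key=lambda k: cosine_similarity(qv, vectorize(corpus[k])))

# Hypothetical article corpus.
articles = {
    "a1": "machine learning models for peer review screening",
    "a2": "open access publishing economics and library budgets",
    "a3": "deep learning approaches to automated peer review",
}
print(most_related("automated screening of peer review with learning models", articles))
# → a1
```

The same scoring idea scales from three toy titles to millions of documents; the engineering challenge is in the scale and the data quality, not in the concept.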
Better predictions and connections, drawn from large (ideally enormous) collections of refined “big” data — the B in our digital ABCs — translate into better results, better experiences.
Then, as data-driven decision-making yields greater accuracy and better results, AI automation will naturally increase an organization’s overall efficiency, and thus lower its costs.
Implementing AI tools will pay for itself, and then some.
And you don’t even need to make a huge upfront systems investment. In fact, incremental adoption works better for most organizations thanks to cloud-based AI services, with better platforms and partners to choose from coming online every day.
Like many path-breaking technologies (think microchips or mobile phones), the costs of AI will come down with broader adoption, while quality goes up exponentially.
Let’s look at some real-life examples of how AI solutions applied to the scholarly publishing workflow and ecosystem are already changing the game (and making life easier).
AI applications can help authors with tasks from composing a compelling abstract to identifying which journal is best to target for a particular paper. AI can even recommend collaborators by scouring other publications and academic profiles, while also surfacing the most relevant existing articles or studies to read and reference.
Many slow or tedious chores like citation checking can also be radically improved and accelerated. Popular culture may be telling you to think robot, but, again, think prediction. At its core, AI races through a sea of data to the best option. Look at how AI powers popular dating apps to improve compatible partner-finding. Does it replace dating? No: it augments the process to empower humans. AI doesn’t supplant our judgment so much as replicate it at scale.
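As a sketch of what "racing through a sea of data to the best option" can look like for citation checking, the snippet below matches a slightly mangled citation string against a reference database using Python's standard `difflib`. The reference entries and threshold are invented for illustration; production tools work with structured metadata and trained models rather than raw string similarity.

```python
from difflib import SequenceMatcher

def best_match(citation, records, threshold=0.6):
    """Return (record, score) for the closest known reference,
    or (None, score) if nothing clears the threshold."""
    scored = [(r, SequenceMatcher(None, citation.lower(), r.lower()).ratio())
              for r in records]
    record, score = max(scored, key=lambda rs: rs[1])
    return (record, score) if score >= threshold else (None, score)

# Hypothetical reference database entries.
known = [
    "Smith J. (2019). Neural networks in scholarly publishing. J. Schol. Comm. 12(3).",
    "Jones A. (2021). Open access and citation impact. Res. Metrics 4(1).",
]

# A slightly mangled citation, as often found in submitted manuscripts.
match, score = best_match(
    "Smith, J (2019) Neural networks in scholarly publishing, J Schol Comm 12(3)",
    known)
print(match is not None, round(score, 2))
```

A human copyeditor would make the same judgment; the tool simply makes it thousands of times per manuscript, flagging only the doubtful cases for review.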
Of course, the data must be good, and that’s where high-quality Big Data comes in. (Remember GIGO: garbage in, garbage out.) AI tools can even help authors assemble a rough first draft, or suggest different structures for a study.
AI is an agile collaborator.
Submission and Review
Intelligent article screening? Absolutely. Originality and reproducibility checks? Check! Research integrity screenings, journal and reviewer suggestions … the list goes on.
AI-fueled applications can be the first line of defense and offense when it comes to tackling (and accelerating) many steps in the publishing process, including the indispensable, but previously slow, stages of submitting and reviewing content.
We know that vital scholarly research is too often delayed, and sometimes falls through the cracks, never getting published. We also know that current processes and resource levels are not nearly adequate to handle the growing volume of submissions, particularly to open access publications.
AI can apply human standards while casting and filtering a much wider net.
One of the most common AI applications today is automatic content classification. AI can automatically discern and label/tag the topics and areas of study your content relates to, in order to optimize discovery and content creation.
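To show how little machinery automatic topic classification requires in principle, here is a tiny multinomial Naive Bayes tagger in plain Python. The training snippets and topic labels are made up, and real classifiers are trained on far larger corpora with more sophisticated models, but this is a genuine, working instance of the statistical idea.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

class TinyNaiveBayes:
    """A minimal multinomial Naive Bayes topic tagger (illustrative only)."""

    def fit(self, labeled_docs):
        self.word_counts = defaultdict(Counter)   # label -> term counts
        self.label_counts = Counter()             # label -> number of docs
        self.vocab = set()
        for text, label in labeled_docs:
            tokens = tokenize(text)
            self.word_counts[label].update(tokens)
            self.label_counts[label] += 1
            self.vocab.update(tokens)
        return self

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        scores = {}
        for label in self.label_counts:
            # Log prior plus Laplace-smoothed log likelihood of each token.
            score = math.log(self.label_counts[label] / total_docs)
            total = sum(self.word_counts[label].values())
            for token in tokenize(text):
                count = self.word_counts[label][token]
                score += math.log((count + 1) / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Hypothetical labeled training data.
training = [
    ("gene expression in cancer cells", "biology"),
    ("protein folding and enzyme kinetics", "biology"),
    ("quantum entanglement in photon pairs", "physics"),
    ("dark matter and galaxy rotation curves", "physics"),
]
clf = TinyNaiveBayes().fit(training)
print(clf.predict("enzyme activity in tumor cells"))
# → biology
```

Trained on a publisher's own taxonomy and corpus, the same approach tags new submissions consistently and instantly, which is exactly the kind of routine-but-necessary work AI is suited to absorb.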
AI applications can also transform flat, image-based PDFs into indexable data. Others can transcribe speech into text, even in real time.
Some apps are better at this than others, of course, but AI is driving that technology forward. AI is learning, in effect, how to hear us better and capture what we say more faithfully. Now AI applications can turn an important spoken presentation or interview at a seminar into indexed, searchable content.
Discovery and Dissemination
Another ubiquitous AI-fueled application is search and recommendation, which I like to think of as the research assistant most of us need but few of us could previously afford. AI can cast a vastly wider and “smart” net to locate and surface content that interests us as soon as it’s published.
Personalized news feeds are a basic example. More refined, sophisticated discovery tools will let us have natural-language conversations with the technology, both to explain what we’re looking for and to query that content directly once it’s surfaced.
Again, this isn’t magic. It’s powerfully more precise forms of prediction that learn from us, the humans. It’s tech that progressively refines itself while delivering a far more personalized experience.
And here, again, AI is about augmenting our strengths. Does AI have limitations? For the time being, absolutely. Chief among them: it can only be as effective as our asks and tasks are well conceived, and only as incisive as our data is accurate. (Advising on these parameters is where AI experts come in.)
Could that change in the future? Yes.
Where to Begin Your AI-Centered Digital Transformation
“Okay, Hong,” you’re thinking, “this all sounds wonderful, inspiring, encouraging, convenient—but how do I get started? Where do I get started?”
These four questions are a great place to start:
- What problems are you trying to solve?
- Who are your users?
- What are your organization’s main strategic priorities and objectives?
- What data do you have, and what condition is it in?
But here’s the key: To start catching the ABC winds that will propel and guide your digital transformation, you don’t necessarily need the most advanced solution; you just need one that works for you.
Then take it one problem at a time, carefully measure your success at each step, and communicate it, specifically to senior management to secure their buy-in — and steadily build your portfolio of solutions.
My next post will dive deeper into how AI can support each phase of the research journey, using real-world examples and focusing on making the most of limited resources.
Real-world demonstration is key. My time at the London Book Fair was a nice reconnection to the “real world” — and to the curiosity of real humans focused on solving real-world problems across our community.
AI is all about those problem-solving humans, whether they realize it yet or not.