
From Data Privacy to Governance: Navigating the GenAI Landscape

Generative AI (GenAI) has dominated 2023 tech conversations. In recent months, we've seen the introduction of advanced generative services that allow us to create text, edit images, summarize information, and even write code. 

One tool, ChatGPT, has been the primary focus of GenAI discussions since its November 2022 launch by OpenAI. It is now difficult to find someone in the tech industry who hasn't experimented with ChatGPT or similar generative AI. 

These powerful capabilities raise an important question: how can we apply GenAI in the workplace? In this blog, we will focus on four key areas to prepare for effective GenAI implementation:

  1. Data Privacy
  2. Training
  3. Quality Control
  4. Governance

 

Data Privacy 

Before implementing a GenAI tool, or indeed any AI or machine learning (ML) capability, you need to understand how your data is handled. These technologies thrive on the data they ingest to refine their accuracy. It's crucial to grasp the implications: when you send a query to a GenAI system, you are effectively exporting your data beyond the company walls and enriching the model's knowledge repository.

According to data from Cyberhaven's product, as of June 1, 2023, 10.8% of employees had used ChatGPT in the workplace and 8.6% had pasted company data into it since its launch.

So, before implementing an AI or ML solution, pause and ask yourself: 

  • Where does the service reside? 
  • What happens to the data being sent? 
  • Can we segregate the data so it can only be used for our own models? 

These questions form the bedrock of responsible AI adoption, safeguarding your organization's data integrity. 
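
To make these questions concrete, here is a minimal sketch, in Python and using only the standard library, of the kind of pre-flight redaction step a team might place in front of any external GenAI service. The patterns and placeholder tags are illustrative assumptions; a real deployment would rely on a dedicated DLP or PII-detection capability rather than a handful of regular expressions.

```python
import re

# Illustrative patterns for data that should not leave the company walls.
# A production setup would use a proper DLP / PII-detection service instead.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder tag."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt


if __name__ == "__main__":
    example = ("Summarize this ticket: customer jane.doe@example.com "
               "reported a failed payment on card 4111 1111 1111 1111.")
    print(redact(example))
```

Only the redacted text would then be forwarded to whichever GenAI endpoint the organization has approved.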

 

Training 

With a heightened awareness of what data security means in the AI landscape, we can turn to the second pillar of responsible AI adoption: workforce training.

Any employee today can effortlessly extract spreadsheet data and feed it into ChatGPT for an instant summary, or use it to rapidly craft communications for a product launch. The allure of these possibilities is undeniable, yet it's essential to remember that with great AI power comes great responsibility.

This leads us to a critical question: How do we educate our workforce about the inherent risks of unauthorized AI tool usage in the workplace? 

Forward-thinking companies like PeopleReign have taken proactive steps, offering specialized training content. Their programs, such as "AI for IT Leaders" and "AI for Developers," empower teams to harness GenAI tools safely and effectively. 

 

Quality Control 

Even the most advanced AI still lacks perfect judgment. As we implement generative models, quality control becomes paramount.  

  • Text summaries may miss the intended tone
  • Code can function yet clash with architecture
  • Images might meet needs but not align with branding

We must carefully evaluate confidence levels for each generated artifact. Completing the task is not enough; the output must uphold standards that are often only discernible to human eyes.
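
As a rough illustration of what that evaluation could look like in practice, the sketch below routes each generated artifact based on a confidence score. The thresholds, artifact fields, and routing categories are assumptions for the example, not a prescribed standard; real scores might come from the model itself, from automated checks, or from both.

```python
from dataclasses import dataclass

# Assumed thresholds: below the review bar, a human must look at the output;
# below the reject bar, the output is discarded outright.
REVIEW_THRESHOLD = 0.80
REJECT_THRESHOLD = 0.40


@dataclass
class GeneratedArtifact:
    content: str       # e.g. a text summary, a code snippet, or an image reference
    kind: str          # "text", "code", "image", ...
    confidence: float  # score between 0.0 and 1.0 for this artifact


def route_for_review(artifact: GeneratedArtifact) -> str:
    """Decide whether a generated artifact ships, needs review, or is rejected."""
    if artifact.confidence < REJECT_THRESHOLD:
        return "reject"
    if artifact.confidence < REVIEW_THRESHOLD:
        return "human_review"
    # Even high-confidence output still benefits from periodic spot checks.
    return "approve_with_spot_check"


if __name__ == "__main__":
    summary = GeneratedArtifact("Q3 revenue grew...", kind="text", confidence=0.72)
    print(route_for_review(summary))  # -> human_review
```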

For company-specific content, it may take months of extensive training to instill adequate contextual awareness in a model. Until then, we must verify outputs diligently.

AI's limitations underscore the irreplaceable value of human creativity and intuition. The technology is a tool, not a source of truth. It complements our abilities rather than defines them.

 

Governance 

The final subject we want to cover today, and perhaps the most important one, is governance. Governance is essential to implementing AI that meets our standards. To set ourselves up for responsible AI adoption, we must establish:

  • Approved tools aligned with our security and privacy needs 
  • Processes to carefully vet and integrate new technologies 
  • Confidence thresholds calibrated to use case and risk 
  • Quality controls enacted by empowered reviewers 
  • Policies distinguishing external vs. internal usage

With thoughtful governance, we steer AI to augment our workforce rather than replace it. We define where automation excels, and where final human judgment remains vital. 
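
As one way to picture how the checklist above could become operational, here is a small sketch of a governance policy expressed as code. Every use case, tool name, and threshold in it is purely illustrative; the point is that approved tools, data rules, and confidence bars can be written down somewhere auditable rather than living in individual heads.

```python
# Hypothetical governance policy: which tools are approved for each use case,
# whether company data may appear in prompts, and the confidence level required
# before output can be used without a second human reviewer.
GENAI_POLICY = {
    "customer_communications": {
        "approved_tools": ["internal-llm"],
        "company_data_allowed": False,
        "min_confidence_without_review": 0.95,
    },
    "internal_code_suggestions": {
        "approved_tools": ["internal-llm", "vendor-copilot"],
        "company_data_allowed": True,
        "min_confidence_without_review": 0.80,
    },
}


def is_request_allowed(use_case: str, tool: str, includes_company_data: bool) -> bool:
    """Check a proposed GenAI request against the governance policy."""
    policy = GENAI_POLICY.get(use_case)
    if policy is None:
        return False  # unknown use cases are denied by default
    if tool not in policy["approved_tools"]:
        return False
    if includes_company_data and not policy["company_data_allowed"]:
        return False
    return True


if __name__ == "__main__":
    print(is_request_allowed("customer_communications", "vendor-copilot", False))  # False
    print(is_request_allowed("internal_code_suggestions", "internal-llm", True))   # True
```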

In short, governance is how we uphold ethics. It enables us to adopt AI on our terms, in line with our values. For AI to serve our highest potential, governance must not constrict innovation but rather channel it wisely.

 

Harnessing GenAI's Transformative Power 

In conclusion, as we navigate the evolving landscape of GenAI and its transformative potential in the workplace, it's crucial to recognize the immense value it can bring to ServiceNow customers and businesses at large. At Thirdera, our commitment to harnessing the power of intelligence-based solutions extends beyond just GenAI. We are actively investing in AI technology to assist our customers comprehensively, particularly in the realms of roadmapping and strategic planning. By prioritizing data privacy, training, quality control, and governance, we enable organizations to harness AI's extended capabilities responsibly and strategically. As we continue to explore the possibilities of AI, we look forward to partnering with you to drive innovation, enhance productivity, and achieve your business goals. 


WRITTEN BY

Martin Palacios

Martin is a ServiceNow professional with over 13 years of experience and an excellent track record of managing applications and development teams. As the Practice Director for Thirdera Digital, Martin is responsible for leading projects around architecting and delivering solutions across various ServiceNow applications, and for helping customers achieve greater value from ServiceNow.