Degreed Experiments Archives - Degreed
https://ca.degreed.com/experience/blog/tag/degreed-experiments/
The Learning and Upskilling Platform

AI-Generated Content, Coaching, and Interactive Data
https://degreed.com/experience/blog/ai-generated-content-coaching-interactive-data/
Tue, 18 Nov 2025

Instead of waiting to see what the future of learning looks like, we’re creating our own. It’s what the Degreed AI Experiments Lab is all about, and I want to give you a new sneak peek into that reality. Let me take you on the journey of future capabilities we’re exploring, including:

  1. AI-generated content
  2. Customized feedback and coaching moments
  3. Surveys, data, and debrief conversations

1. AI-Generated Content

Let’s start with multi-modal learning content generation. We’re exploring ways you can use AI to generate content from scratch or to start from your existing documents or Shareable Content Object Reference Model (SCORM) files. From there, that material can be quickly transformed into learning resources of any length or format. You can edit the content produced, with multimedia options for text, images, graphics, and videos—or even slides.

This is an easy way to keep fit-for-purpose learning content engaging, diverse, and always relevant. And it’s one we plan to launch in early 2026.

2. Customized Feedback and Coaching Moments

Degreed Maestro is about more than conversations with AI. It’s about creating high-impact, comprehensive learning experiences. To do that, we’re exploring multi-step AI experiences that combine multiple formats to provide the learner with opportunities for improvement, such as customized feedback or mini coaching moments.

For example, after I practice a sales call with Maestro, it would provide scores and feedback based on my performance, showing me what I did well and what I need to improve. It would also offer mini coaching moments or a chance to replay and practice the specific things I need to work on.

3. Surveys, Data, and Debrief Conversations

We’re also excited about a new way to use Maestro through natural, AI-powered debrief conversations. These encounters can drive learning and reflection while surfacing valuable insights along the way. 

Instead of formal surveys that produce fatigue and rushed, incomplete answers, Maestro can weave smart questions into everyday conversations or draw insights from existing ones with no extra effort required. In these settings, people tend to share more openly and in greater depth than they would in a traditional survey, especially when they know their responses can remain confidential. 

In one example, we asked employees how they’re using AI in their roles via a quick conversation with Maestro. Maestro gathered the responses and created a live dashboard to aggregate the results. From there, we could even chat with it about the data to explore further trends. 

This approach makes it straightforward to establish a baseline understanding of each employee’s skills, needs, and experiences, and then tailor learning accordingly. The impact measurement available afterward uncovers a depth and richness of insight that’s simply out of reach with traditional methods. It’s real-time understanding that was previously invisible.

Stay Updated

Imagine what you could achieve with that level of clarity about employees, their needs, and the impact of your learning programs. We’d love your feedback as we keep exploring, so follow me on LinkedIn or sign up for our AI Experiments Lab newsletter to stay updated on our latest tests.

Degreed Experiments: Unlocking Hands-On, Adaptive Learning
https://degreed.com/experience/blog/degreed-experiments-unlocking-adaptive-learning/
Wed, 16 Jul 2025

Testing AI-driven adaptive learning experiences and getting feedback uncovered how to make progression always feel customized and "just right."

One of the most compelling aspects of Degreed Maestro’s conversational voice AI is its ability to create bi-directional, personalized, and adaptive learning experiences. Yet, we know voice isn’t the ideal interface for every learning task.

Voice AI falls short when:

  • You need high fidelity for creating or inputting complex information.
  • Visual referencing is crucial.
  • You need time and space for in-depth problem-solving.
  • Tasks involve non-speaking elements, like typing code or navigating a software interface.

This raised a key question for us: Could we harness the best of voice—its interactivity and adaptiveness—and apply it to non-voice contexts?

Our Latest Experiment: Adaptive Learning Exercises

Our answer became an innovative approach to “learn by doing.” The concept is simple: you start with a learning goal, which is then broken down into progressive requirements or milestones.

From there, AI generates micro-tasks, one at a time, to guide your understanding and practice. You receive immediate, personalized feedback after each attempt, and the next task dynamically adjusts based on your progress and comprehension. 
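In code terms, the loop just described can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the function names are invented, and the generation and grading steps are trivial placeholders standing in for what would be AI calls in the real system.

```python
# A minimal sketch of the adaptive micro-task loop described above. The
# "AI" steps (task generation and grading) are trivial placeholders here;
# none of these names are real Degreed APIs.

def generate_task(milestone, mastery):
    """Placeholder for AI task generation: easier prompts while mastery is low."""
    difficulty = "intro" if mastery < 0.5 else "applied"
    return f"{difficulty} task for: {milestone}"

def grade_attempt(task, attempt):
    """Placeholder for AI grading: returns (feedback, score between 0 and 1)."""
    score = min(1.0, len(attempt) / 20)  # stand-in for a real rubric
    return f"feedback on '{task}'", score

def run_adaptive_exercise(milestones, get_attempt, passing=0.8, max_tasks=50):
    """Serve micro-tasks one at a time until every milestone is passed."""
    mastery = {m: 0.0 for m in milestones}
    for _ in range(max_tasks):  # hard cap so a struggling learner isn't looped forever
        if all(score >= passing for score in mastery.values()):
            break
        milestone = min(mastery, key=mastery.get)  # weakest milestone next
        task = generate_task(milestone, mastery[milestone])
        feedback, score = grade_attempt(task, get_attempt(task))
        mastery[milestone] = max(mastery[milestone], score)  # adapt to progress
    return mastery
```

The key property, mirrored from the description above, is that each task is generated from the learner’s current state, and the next task depends on how the last attempt was graded.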

Early Feedback

  • “I thought it was intuitive. I liked how it made you do something after each explanation and task description.” 
  • “Overall, it was smooth. The feedback it gave me on my responses was helpful.”

Diversifying Modalities 

Hands-on interaction was critical. We began with text input for various tasks, then expanded to support a code editor for more technical applications.

Next, we integrated webcam and screen recordings. The screen recordings, in particular, proved invaluable, allowing early testers to demonstrate their abilities directly within the context of their work or specific applications. 

Early Feedback

  • “I like how it had me not only write but record verbally. What I was reinforcing was what I learned. So those are very good.” 
  • “It was definitely interactive and engaging.”

Finally, we added multiple-choice questions because constantly requiring text input can feel burdensome; these questions offer a lighter way to confirm understanding.

With this diverse array of modalities, AI can select the most appropriate format for each task and sequence them progressively, effectively managing cognitive load. In practice, this often means starting with multiple-choice questions to confirm foundational understanding, moving to text or code input for hands-on application, and concluding with webcam or screen recordings to demonstrate mastery.
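As a rough illustration of that sequencing logic, a format selector might look something like the sketch below. The thresholds and format labels are invented for the example, not taken from the product.

```python
# A toy sketch of the sequencing described above: lighter-weight formats
# early, heavier hands-on formats as estimated mastery grows. Thresholds
# are invented for illustration.

def pick_modality(mastery, technical=False):
    """Map estimated mastery (0 to 1) to a task format."""
    if mastery < 0.3:
        return "multiple choice"        # confirm foundational understanding
    if mastery < 0.7:
        return "code input" if technical else "text input"  # hands-on application
    return "screen recording"           # demonstrate mastery in context
```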

Creating Tailored Instruction

A significant challenge was finding the sweet spot for instructional support: enough to prevent frustration, but not so much that it created noise. Our current solution involves several configuration options:

  • Hints – Available on demand.
  • AI-Powered Q&A – For real-time clarification.
  • Instructional Depth – Customizable from none, to basic, to in-depth.

We also include a “rabbit hole” icon on each task, allowing early testers to deep-dive into specific learning resources for additional context or explanation, if and when they need it. 

Navigating the Nuances 

Making the process truly adaptive came with its own set of challenges. If too open-ended, learners lacked clear expectations regarding time or effort. We opted for an initial structure combined with an adaptive path, ensuring progress was always visible.

Measuring progress against requirements was another hurdle, especially when mastery might take several attempts across an undetermined number of tasks. Achieving the “just right” feeling for adaptiveness required extensive iteration. The system needed to ease up when a learner struggled, chunk tasks into manageable sizes, build upon prior knowledge, and align with examples and instructions without removing the challenge entirely.

From Prototype and Beyond: The Journey Continues

This experiment will continue to evolve—thanks to feedback from more than 50 early testers (thank you!). We anticipate that admins and curators will design learning objectives, integrate these adaptive experiences into broader learning pathways, and benefit from robust reporting and insights. Per early feedback, a core component of this will be the ability to upload existing documentation or training materials to automatically generate and customize learning requirements by leveraging your organization’s unique knowledge and an employee’s unique work. 

As one early tester shared, it’s “a great tool. It’s opened my eyes to how companies can adopt it with proprietary knowledge to really help… an online assistant that will help in real time.”

Ultimately, this experiment has proven to be an exceptionally flexible, engaging, and effective learning tool. Its ability to provide immediate, tailored feedback without increasing administrative burden is invaluable. The AI-driven adaptive progression ensures the difficulty always feels “just right,” while optional deeper instruction empowers learners to customize their support. 

Early Feedback

  • “For people who want to learn on the job and quickly need to understand something, yep, absolutely valuable.”

What’s Next for Adaptive Learning?

The success of this prototype has naturally sparked exciting, new “what if” possibilities for future development, including:

  • Adaptive Assessments: Reimagining how we measure skills and knowledge.
  • Adaptive Learning Support & Motivation: Moving beyond adapting instruction and tasks, we envision personalizing the way we support and motivate learners throughout their journey.
  • Adaptive Workflows: Instead of screen recordings, which carry data sensitivities, can AI generate copy-cat workflows optimized for practice without real-data risks? 

Get Involved

If you’re interested in experiencing this prototype firsthand, you can:

AI Agents for L&D: Innovating Across Your Ecosystem
https://degreed.com/experience/blog/ai-agents-for-ld-innovating-across-your-ecosystem/
Tue, 10 Sep 2024

There’s always something new going on in AI, but “AI agents” is a macro trend I think everyone in business needs to be ready for—and especially anyone focused on impactful learning programs.

“The next big breakthrough in AI is AI Agents,” notes Aaron Levie, CEO of Box. “This is when AI goes from being used as an assistant to chat with to using AI to accomplish complete tasks that a human might otherwise have to perform. This moves AI from being a ‘read-only’ operation to fundamentally a ‘read/write’ operation. Ultimately, this brings us much closer to the full promise of AI, in particular in the enterprise, where AI can begin to complete any part of a workflow.”

Here’s a closer look at what AI agents are, and what they mean for learning.

What are AI agents?

More than chatbots, AI agents are systems that have autonomy to make decisions and take action. Given a goal, they can make plans, execute tasks, and re-evaluate when necessary.

Let’s break it down with some examples of different levels of AI autonomy, or “agentic” abilities. As with AI capabilities themselves, naming conventions are all over the place, so the following is a basic framework my team created to help us make sense of them. Level 1 is the most basic, and Level 4 is the most advanced. Note: This is not meant to be definitive, and categories and definitions can certainly bleed into each other.

Level 1: AI Tools 

You grab a tool from your toolbox. It does its job, and then you put it back. Similarly, AI tools are good at completing specific, narrow tasks that you define. These tools are seemingly everywhere these days, and they’re probably what most people have the most exposure to. 

AI tools include: 

At Degreed, we’re building AI tools such as Degreed AI Skill Review for analyzing your skills and other tools to help you maintain your learning content and enhance your learning ecosystem.

Level 2: AI Assistants

I don’t have an AI assistant, but I understand they’re great at augmenting your ability to get things done—like setting up meetings and completing other basic tasks. A good AI assistant knows your preferences and working style and can leverage multiple tools. This is where we see more autonomy in selecting the right tool to perform the right action.

AI assistants include:

Our Degreed AI Assistant helps you find content, build a Pathway, and complete tasks in Degreed.

Level 3: AI Copilots

More than an assistant, a copilot takes the controls alongside you. You work together, side by side, to do the job. While not fully independent to do the work by itself, an AI copilot teams up with you to amplify the capabilities of a human and a machine.

AI copilots include:

  • GitHub Copilot: for writing code
  • MultiOn: for navigating the web and interacting with websites alongside you

At Degreed, we’re exploring copilots that can help you build and manage learning programs and AI tutors that can look over your shoulder to give you real-time feedback as you work to improve your skills. 

Level 4: Multi-Agent Supervisors

At this level, AI helps manage other AI agents, each with specific capabilities. AI supervisors engage in planning, delegation of responsibilities, and evaluation of work by other AI agents to ensure they’re achieving the desired outcome. In these ways, supervisors exercise the most autonomy and have the most “agentic” of capabilities. It’s a level of AI we’re only beginning to see emerge.

Multi-agent supervisors include:

  • Crew AI: for building multi-agent systems
  • Strawberry: A yet-to-be-announced GPT-5 model from OpenAI, rumored to have supervisor capabilities*

At Degreed, we’re exploring leveraging an AI supervisor to help build engaging learning experiences. This experimental Degreed Pathway supervisor coordinates amongst other agents to plan content, create content, find resources, extract quotes, do quality assurance, and ensure the Pathway is built to achieve its objectives. Next steps would be optimizing the experience based on real user activity.
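Stripped to its skeleton, the supervisor pattern described here looks something like the sketch below. The plan, agents, and review step are hard-coded placeholders standing in for model-backed workers; this is not the actual Degreed implementation.

```python
# A toy sketch of the multi-agent supervisor pattern: plan subtasks,
# delegate each to a specialized agent, and check the work before
# accepting it. All names and logic are invented placeholders.

AGENTS = {  # each value stands in for a model-backed worker agent
    "plan content": lambda obj: f"outline for '{obj}'",
    "find resources": lambda obj: f"three resources on '{obj}'",
    "quality assurance": lambda obj: f"QA passed for '{obj}'",
}

def supervise(objective, max_retries=1):
    """Delegate each step to its agent; retry if the output fails review."""
    results = {}
    for step, agent in AGENTS.items():
        for _ in range(max_retries + 1):
            output = agent(objective)
            if output:  # placeholder for the supervisor evaluating the work
                results[step] = output
                break
    return results
```

The design point is the division of labor: the supervisor owns planning, delegation, and evaluation, while each agent owns one narrow capability.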

What This All Means for L&D 

First, all this innovation holds the promise of quickly improving L&D workflows. With AI agents, the end goal is to do more than simply use AI to do what you currently do faster and cheaper. However, that’s a good place to start. Why? Because applying AI to what you currently do is the simplest and easiest way to understand AI technology, its capabilities, and its limitations.

Second, start planning on inventing entirely new ways of doing things. The ways that we craft learning experiences, evaluate skills, and navigate careers are all ripe for reimagination. Find technology partners who are ready to support your internal R&D efforts. If you’d like to get more involved with Degreed Experiments, an initiative I lead, please email me. You don’t need to be a Degreed client!

Third, keep people at the center. The best way to navigate the limitations of AI and the AI anxiety of your workforce is to help facilitate AI and employees working together. For example, a learning design copilot can help craft visuals and manage translations, allowing your learning designers to strategize and align experiences to employees’ needs. The best way to advocate for the people in your company, and keep them at the center, is to be a credible voice about the possibilities, and limitations, of AI and AI agents.

At Degreed, we want to help equip you to navigate these exciting developments. Get involved by checking out our AI experiments, learning more about navigating AI, or sending us your ideas for future experiments.

Join us as we unravel the future of AI agents and how we can get work done, together.

Find out more.


* Technology is advancing quickly, and examples of AI tools and their capabilities can quickly become outdated.

Internal Talent Mobility: Using AI to Find Overlooked Skills
https://degreed.com/experience/blog/internal-talent-mobility-using-ai-to-find-overlooked-skills/
Wed, 24 Jul 2024

Something interesting happens when you start working at a company. People start to see you as your job title. It becomes difficult for them to imagine you doing anything other than your current role. Your previous experiences and your wider skill set don’t factor much into the day-to-day, so they quickly fade from consideration.

This problematic perception becomes one of the biggest reasons why company after company overlooks hidden skills that could be incredibly useful for enabling internal talent mobility. It’s also one of the reasons why so many business leaders are excited about skills-based talent practices. Skills-based strategies help organizations see past job titles, resumes, and credentials. They help organizations benefit from what their employees can actually do. But good data is key.

A skills-based approach can backfire if you don’t have good skill data. Incomplete data often causes quality internal talent to be overlooked. Why? Those employees simply don’t have comprehensive skill profiles.

The Experiment: Strengthening Internal Talent Mobility Using AI

We hypothesized that AI can use inference to help fill skill data gaps and broaden a search for talent.

To set up our experiment, we created skill profiles (with 12 or more skills) for six employees: Taylor, a product manager (Hey, that’s me!); Adrian, a back-end developer; Jessica, a client success manager; Anne, a sales director; Quyen, a technical support specialist; and Stephen, a data analyst.

We then used AI to build a list of required skills for a new internal data analyst opportunity.

Today, most talent systems are looking for exact skill matches to recommend employees for new opportunities. This is what the talent pool looks like when an exact skill match for the data analyst position is performed.

Basically, Stephen (who is already a data analyst) looks to be the only person remotely qualified. This result is neither surprising nor insightful. And in today’s fast-moving market, companies, managers, and employees need unexpected, innovative, and flexible solutions to fill skill gaps internally.

But, using AI, we asked the system to also highlight people who have adjacent skills. After all, just because someone doesn’t explicitly have a skill listed on a skill profile doesn’t mean that person doesn’t actually have that skill. And voila! This is what our talent pool looks like when adjacent skills (using AI inference) are considered. 

Including adjacent skills creates an 89% increase in potential skill matches. We now see that Taylor (a product manager) and Jessica (client success manager) have adjacent skills that would help them qualify for this new position. Now we’re getting the more insightful and unexpected solutions that companies, managers, and employees need. 
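The difference between exact matching and adjacency-aware matching can be shown with a toy sketch like the one below. The skills, profiles, and adjacency map are invented; in the experiment, the adjacency relationships came from AI inference rather than a hand-written table.

```python
# A toy sketch of exact vs. adjacent skill matching. ADJACENT is a
# hand-written stand-in for AI-inferred skill relationships; all data
# here is invented for illustration.

ADJACENT = {
    "sql": {"data modeling", "spreadsheets"},
    "data visualization": {"dashboarding", "reporting"},
}

def match_score(required, profile, adjacency=None):
    """Fraction of required skills covered exactly or via an adjacent skill."""
    covered = 0
    for skill in required:
        if skill in profile:
            covered += 1  # exact match on the skill profile
        elif adjacency and adjacency.get(skill, set()) & profile:
            covered += 1  # adjacent match via inference
    return covered / len(required)

required = ["sql", "data visualization"]
jessica = {"reporting", "client communication"}  # a client success manager

exact = match_score(required, jessica)                # 0.0: invisible to exact matching
broadened = match_score(required, jessica, ADJACENT)  # 0.5: surfaced via "reporting"
```

Under exact matching, a profile like Jessica’s never surfaces; with adjacency considered, it enters the pool and can prompt a follow-up conversation or assessment.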

Let’s take it one step further. We know that skills aren’t static. Increasingly, we’ll all need to upskill to fill skill gaps. So let’s use AI inference to highlight stretch skills that someone could upskill into.

Look at all that color. Accounting for development opportunities provides an even richer talent pool, resulting in 179% more potential matches than our original analysis. Quyen (technical support specialist) was not initially on our radar for this role, but we can now see that she has a path to upskill herself.

AI inference isn’t the end-all-be-all, but it’s a good place to start.

Are there limits to this use case? Sure. Just because AI infers that someone may have a skill doesn’t make it true. But it does help broaden our search, and it doesn’t exclude potentially qualified candidates right from the get-go. And this information can lead to further analysis—through follow-up conversations or skill assessments—providing hiring teams with a wealth of potential options for internal talent mobility before otherwise defaulting to an external search.

Turn the talent you have into the talent you need.

Considering how challenging it can be to hire good talent, or to develop it from scratch, there’s no reason why learning leaders shouldn’t be amplifying the internal talent they already have. And if there are still some gaps at your organization, Degreed can help to turn the talent you have into the talent you need. 

Take a look at Verizon. When COVID lockdowns began, the wireless network operator, a Degreed client, temporarily closed nearly 70% of its retail stores essentially overnight. In doing so, Verizon looked at the skills of over 20,000 workers and redeployed them to serve other critical business needs. Through training, daily on-the-job coaching, and rapid reskilling, the company avoided layoffs and met the needs of customers faster and more efficiently. 

Exness, a Degreed client and Cyprus-based fintech company, found similar success clarifying job roles by mapping skills to them, which in turn enabled the organization to embrace a focused, adaptive approach to employee development. Skill gaps are clearly identified, learning needs are proactively defined, and upskilling happens, contributing to measurable business outcomes like improved performance and innovation.

Find out more. 

Thanks for following along with our experiments! You can check them all out, and be sure to watch for new experiments coming soon. (For example, would you like to see if AI can help estimate the time and cost for any employee to upskill to a certain level? Let’s find out!)


Try Our Latest Experiment: Conversational-AI Skill Review
https://degreed.com/experience/blog/try-our-latest-experiment-conversational-ai-skill-review/
Thu, 11 Jul 2024

When it comes to understanding someone’s skill level, it’s hard to beat just sitting down and having a conversation. This is why we’re eager for you to try our latest experiment—conversational Skill Review. It’s a new, AI-enabled take on one of Degreed’s longstanding skill rating tools that gives people a more detailed and accurate reflection of their skill levels. But don’t worry, no Degreed account or previous experience is necessary to try it out.

This new experiment combines two of our key interest areas: AI-enabled interactions—instead of filling out a questionnaire, you get to answer questions with your voice in real-time—and better skill data. You can try it now or read this post and try it after.

A Continuation of Our Skill Review Tradition

Degreed has seen the value in understanding and representing people’s skills for a long time. For years we’ve offered multiple ways for people to rate skills through self-ratings, manager ratings, and peer ratings. Degreed also gathers skill “signals” from assessments, ratings, and learning activities. Rounding out this list is our Skill Review questionnaire, a mainstay of the Degreed platform since 2018.

Skill Review was a project I helped manage several years ago, so I’m happy for a chance to revisit it. When we created it, the intent was to have something more accurate than a self-rating, even if it required a little more time and effort. The original Skill Review takes about 10 minutes; the AI-enabled version should take about half that.

Embracing a More Casual Skills Conversation

With the introduction of conversational AI (the technology we recently used for our experimental AI executive coach), we wanted to see if there are new and better ways to review and rate someone’s skill levels. We knew from previous experiments that conversational voice interactions were great for reflection. We also knew that AI was very capable of assessing a transcript based on a rubric. We wanted to see how well AI would be able to adapt to carrying a casual conversation about a person’s skill while gathering information to provide a skill level rating.

The results from our early tests are impressive: The experience is proving quicker and easier than its predecessor. It’s also proving to be quite accurate.


Have you tried it yet? Not only will the conversation help rate your core skills, but it will also help you uncover transferable skills you don’t know you have. For example, you might not be a project manager, but you can still do a Skill Review for “project management” to see if you’ve been unknowingly demonstrating project management capabilities in your work.

After your conversation about your experience in the skill, the system will analyze the conversation and generate a report on your skill level.

Let us know what you think! We’ll be gathering feedback and the results from our testing and will publish them in the near future. We believe in the potential for this new technology to help people identify their skills and be recognized for them. Stay tuned for more!


AI-Generated Content: How Companies Can Avoid the Slop Ahead
https://degreed.com/experience/blog/ai-generated-content-how-companies-avoid/
Thu, 06 Jun 2024

An explosion of mediocre AI-generated content at work will harm employees trying to learn, grow, and be productive in their jobs. Find out how to avoid it.

A giant collection of floating debris called The Great Pacific Garbage Patch covers a huge swath of the North Pacific Ocean. An environmental tragedy, it’s a grim reminder of humanity’s overreliance on single-use plastics.

We’re headed for a future of AI-generated garbage patches. AI-generated content can be helpful, making new information more accessible in new modalities. However, an explosion of mediocre content at work can harm employees trying to learn, grow, and be productive in their jobs.

The Great Pacific Garbage Patch. Image by AFP.

We all know spam. Get ready for slop.

My favorite new phrase for unhelpful AI-generated content is “slop.” Unwanted email is spam; unwanted AI-generated content is slop. And you better believe there’s a lot of slop coming our way.

One type of slop increasingly barreling our way is the AI-generated summary. Google recently added AI summaries to its search results, causing some hilarious (but problematic) situations like this one:

An example of a problematic AI-generated search summary.

AI is increasingly capable of generating text, images, videos, and music out of seemingly nothing. AI has created an estimated 15 billion images in the past 1.5 years. That sounds like a lot, and it is. To give some perspective, AI generated about the same number of images in 1.5 years as photographers took in the last 150 years.

We saw a similar phenomenon during the last 15 years with the rise of user-generated content, but AI-generated content will be even bigger in scale. Another source suggests that 90% of the internet’s content will be AI-generated by 2026.

At such a massive scale, what does this dump of content mean for workers and companies? This explosion of content will have dramatic consequences for how employees find information, whether it’s to guide their learning or to help them find the resources they need to perform in their roles.

The Risks of AI-Generated Content in the Workplace

An explosion of mediocre content will make it harder to find high-quality, authoritative sources. Let’s look at some of the specific workplace challenges we should expect with more AI-generated content:

1. AI hallucinates and can provide misinformation.

AI has an incredible poker face—you really can’t tell when it’s bluffing. AI can struggle with topics when online dialogue among individuals doesn’t align with evidence-based research. It also struggles to effectively handle new and cutting-edge topics. This means that we will continue to need authoritative sources.

Example: If your employee searches “What are the best cybersecurity tips?” your AI assistant may recommend out-of-date practices that put your company’s data at risk.

2. AI is bad at holistic understanding.

When AI is asked a question about a collection of documents (like those found in your Google Drive or Microsoft OneDrive), it uses a method called retrieval augmented generation (RAG).

That is a fancy way of saying it does a search across the documents. It looks for keywords similar to your query, pulls chunks of information from documents, and then sends them to the LLM to generate a response. This means that AI is really good at finding a “needle in the haystack.” In other words, AI finds a detail that matches the query but is bad at connecting dots and seeing the big picture.

Example: If a manager asks an AI assistant to summarize the key themes of a resource, it will struggle if those themes are not explicitly called out.

3. AI summaries remove important context.

AI can present a quick answer to almost any query. The problem is that the answer, which may have been sourced from internal documents, lacks the surrounding context of the author, their background, the date it was published, the context in which the answer was provided, etc. This means you could get information that is low quality or out of date and not realize it.  

Example: If an employee searches for “What is the latest sales forecast?” your AI assistant could just as likely find a document from five years ago that uses the term “latest sales forecast.”
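One mitigation is to carry provenance (author, date) through retrieval instead of passing bare text to the model. A minimal sketch, with illustrative field names and figures:

```python
from datetime import date

# Each document keeps the provenance that raw text loses: author and date.
documents = [
    {"text": "Latest sales forecast: $2M.", "author": "A. Lee", "date": date(2020, 3, 1)},
    {"text": "Latest sales forecast: $5M.", "author": "B. Kim", "date": date(2025, 1, 15)},
]

def most_recent(matches):
    """Among matching documents, prefer the newest one."""
    return max(matches, key=lambda d: d["date"])

def with_context(doc):
    """Surface the provenance alongside the answer text."""
    return f'{doc["text"]} (source: {doc["author"]}, {doc["date"].isoformat()})'

best = most_recent(documents)
print(with_context(best))
# A system without the date field could just as easily return the 2020 figure.
```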

How to Manage the AI-Generated Content Dump at Work

Managing AI Search and Content with the LXP

Despite all the challenges, on-demand summaries and answers generated by AI are just too convenient to resist. The time savings they promise and provide are too great. Fortunately, L&D leaders can maximize these new AI content capabilities and mitigate the risks with the help of a Learning Experience Platform (LXP).

And this isn’t the LXP’s first content clean-up rodeo. Around 12 years ago, the Learning Experience Platform emerged in response to a similar challenge with an explosion of online learning resources. Companies needed a technology that could connect and curate learning content most relevant to individual employees, the teams and departments those employees work within, and entire organizations.

For over a decade now, the LXP has filled, and grown beyond, its role as a content consolidation machine. And today, the LXP can help navigate AI-generated content by providing context, curation, and canonical sources.

The LXP is adapting to help tackle these challenges brought on by AI:

  1. The LXP offers canonical resources from authoritative sources. Not only do these authoritative sources provide credible and accurate information, but having canonical resources helps create shared experiences and a shared point of reference.
  2. The LXP provides a system of curation that makes it easy to know what to pay attention to. Curation provides clear guidance and helps avoid too many choices.
  3. The LXP can surround resources with the necessary context. That context may be a note from your manager or the ability to see how certain skills align to key roles and initiatives in your company. 
  4. AI agents tasked with reviewing, cross-checking, and cleaning up content will become plentiful. If AI is getting us into this mess, the least it can do is help clean it up. Soon you’ll be able to get AI to do the heavy lifting when it comes to maintaining your content—so it remains trustworthy.

Combining the Benefits of AI Chat Assistants with the LXP

Employees might choose to search for a learning app to find what they need or use an AI chat assistant for convenience. In the future, tech vendors and organizations can connect and integrate with chat assistants to guide people to curated, trustworthy, and authoritative sources.

The Great Garbage Clean Up

The time to be proactive is now.

While it may be easier to create junk than to clean it up, dedicated tools and a thoughtful approach can keep the slop out. Let’s not wait until the land of learning is riddled with garbage before we start cleaning it up.

If you want to explore the challenges of AI-generated content and how to manage it in your learning system, we’d love to chat with you. Email tblake@degreed.com


The post AI-Generated Content: How Companies Can Avoid the Slop Ahead appeared first on Degreed.

]]>
https://degreed.com/experience/blog/ai-generated-content-how-companies-avoid/feed/ 0
Conversational Voice AI for L&D: Coaching, Role Playing, and More https://degreed.com/experience/blog/conversational-voice-ai-for-ld-coaching-role-playing-and-more/ https://degreed.com/experience/blog/conversational-voice-ai-for-ld-coaching-role-playing-and-more/#respond Fri, 24 May 2024 21:37:45 +0000 https://explore.local/2024/05/24/conversational-voice-ai-for-ld-coaching-role-playing-and-more/ Okay, listen. I wrote this entire blog post about an AI executive coach we’ve been experimenting with and thought, “Anyone reading this will probably just want to try it out first.” Check it out! It’s not perfect, but it is kind of fun. (We’ll pay for the token usage, so don’t chat all night.) We’ll […]

The post Conversational Voice AI for L&D: Coaching, Role Playing, and More appeared first on Degreed.

]]>
Okay, listen. I wrote this entire blog post about an AI executive coach we’ve been experimenting with and thought, “Anyone reading this will probably just want to try it out first.”

Check it out! It’s not perfect, but it is kind of fun. (We’ll pay for the token usage, so don’t chat all night.) We’ll be leaving the sign-up open for a few days.

When you’re done chatting, come back and read the rest of this post. And with that, back to our regular programming:

Yes, the robots can talk.

OpenAI announced on May 13 that a new conversational mode will be released in the next few weeks.

These improved capabilities becoming ubiquitous will make chatting with AI verbally a regular way we interact with technology. So, what will it mean for L&D? And how did we get here? After all, voice assistants aren’t new. So what’s the big deal?

We conducted a few experiments to find out, gathering input from L&D pros along the way. Join us for a look at the results—and the implications. It’s looking more and more like conversational AI will have a significant impact on some key aspects of L&D.

First, a Quick Technology Summary

Voice assistants like Apple Siri and Amazon Alexa have been around for a while. They use natural language processing (NLP) to take a request and match it to canned responses. This means they’re helpful for checking the weather, but also, as Microsoft CEO Satya Nadella said in 2023, they’re “dumb as a rock.” They don’t have the dynamic or generative abilities of a large language model (LLM) like that used by ChatGPT.

ChatGPT changed the game. In 2022, Whisper was introduced as a complement to ChatGPT-3.5, providing users the ability to convert audio into text. This allowed users to speak to ChatGPT, which could then read responses back at the click of a button. Audio and voice became usable, but the technology still lacked the ability to interrupt, be interrupted, or have a real conversation without strict instructions to take turns.

Newer startups have enabled more conversational interactions on top of LLMs. They introduced the ability to automatically detect turn taking, allowing for interruptions and speaking freely back and forth. They also added nice interjections like “mmm hmms,” which occurred while the AI was listening. Further, they analyzed vocal expressions. Some latency still happened in these experiences as they were essentially a multi-step process; an LLM like ChatGPT-4 creates a response and then a voice agent, a separate technology, speaks the response.

Then this month, OpenAI announced that ChatGPT-4o, the most recent version of the company’s generative AI chatbot, will be able to natively understand and reply conversationally. This means that there will no longer be a voice agent reading responses from the LLM. The LLM will speak. If you tell the AI to “slow down” or “pretend to be a character,” it will. It will also be naturally vocally expressive and understand users’ vocal expressions. Because it will all be built into the same system, it will be faster than anything that’s come before.

To date, a text model of GPT-4o has been released. The most advanced voice capabilities (including a controversial Scarlett Johansson-sounding voice) have not been released as of this publication. That means that right now, today, you can talk to GPT-4o, which is fast, but you have to tell the system if you want to interrupt (because it’s still using the old voice input-readout technology). You can read details of what is currently available.

Log in today and you might see a screen similar to this when starting voice mode in ChatGPT:

Whew! That was a lot. Let’s talk about what this could all mean for L&D after the floodgates open.

Our Hypothesis: Faster, More Authentic Interactions for Practice and Reflection

We had previously explored chat-based scenarios (via typing) as a form of practice and role play. These interactions were fun at first, but the effort required to make them feel real (while you knew it was an AI) was hard to sustain. It was also weird to play out scenarios via chat that you’d be more likely to have in a real spoken conversation (like a call).

We wanted to add voice capabilities to see if it would make the scenario feel more real and make it easier to engage.

Experiment No. 1: AI Coach

We used GPT-4 Turbo as our LLM and added a conversational layer on top. We then instructed the assistant to act as an executive coach. Prior research has shown GPT-4 to be the most effective at role playing (in a limited evaluation against other models).
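The multi-step pipeline behind this kind of experience can be sketched as a listen-think-respond loop. In the sketch below, `transcribe`, `complete`, and `speak` are hypothetical stand-ins for the real speech-to-text, LLM, and text-to-speech services; the fact that each turn hops through three separate services is also where the latency discussed earlier comes from.

```python
# One conversational turn of a voice AI coach: speech-to-text, an LLM
# prompted to act as an executive coach, then text-to-speech.
# All three service functions are illustrative stubs, not real APIs.

COACH_PROMPT = "You are an executive coach. Ask reflective questions."

def transcribe(audio):
    # Stand-in for a speech-to-text service (e.g. something like Whisper).
    return audio["text"]

def complete(system_prompt, history):
    # Stand-in for an LLM call; returns a canned coaching question.
    return "What outcome would make this conversation a success for you?"

def speak(text):
    # Stand-in for a text-to-speech service; returns the text it would voice.
    return text

def coach_turn(audio, history):
    """One turn: listen, append to history, generate, respond aloud."""
    user_text = transcribe(audio)
    history.append({"role": "user", "content": user_text})
    reply = complete(COACH_PROMPT, history)
    history.append({"role": "assistant", "content": reply})
    return speak(reply)

history = []
reply = coach_turn({"text": "I struggle to delegate."}, history)
```

Keeping the full `history` list is what lets the coach refer back to earlier parts of the conversation across turns.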

This video captures my first experience with this combination:

As you’ll notice, there’s a bit of latency but the conversational ability is impressive. I’m the one having a harder time putting words together.

I shared the link to try out the AI coach with L&D folks in my network for their feedback.

Overall sentiment was positive:

  • “Natural”
  • “Wow!”
  • “Fantastic!”
  • “Realistic”
  • “Smooth”
  • “Something I could see myself using on a daily basis.”  
  • And then there was my wife’s reaction: “That is really freaky.”

Comments on interacting via voice:

  • Conversational flow is very good and most human-like yet.
  • Voice is good for reflection; people are less self-critical over voice because it is linear (they can’t go back and make edits) and they don’t see what they’re outputting. It felt faster and required less effort.
  • When asking the coach to slow down (when trying to document recommendations), the coach couldn’t.
  • Knew it was AI but became less aware over time.
  • Tone and inflections were good and conversational.
  • Latency was noticed but also called out as not too bad.

Comments on the usefulness of the AI coach:

  • The coach offered ideas and recommendations that were helpful.
  • It prompted real reflection with good questions.
  • Users found the approach and methodology effective.
  • The coach has a habit of reflecting back what the user said (mentioned as a positive and negative).
  • It suggested a role play to practice the recommendation, which was appropriate, but the role play itself felt a bit awkward.

Comments on the user interface:

  • Needs a way to document recommendations (mentioned several times).
  • Wasn’t immediately clear how to start the conversation.
  • May be helpful to have an avatar—to feel like you’re speaking to someone.
  • Needs ways to pause (for reflection and to just step away).
  • Needs to communicate expectations about the length of the experience.
  • Would be helpful to see a transcript, summary, next steps, or resources to revisit at a later time.

Experiment No. 2: AI Role Playing with Expressive Understanding

In this experiment, we wanted to see if expressive understanding and interactions with AI would feel natural. We experimented with a role playing interaction—helping resolve a customer service interaction with an angry customer.

Here’s a short clip:

We haven’t gotten as much feedback on this experience yet, but here are my initial reactions. The role play was effective in that it made me uncomfortable! It was hard. It was stressful. I felt some level of “realness” hearing another person’s upset voice.

But because I knew it was only role play, I knew I could also bail out when I felt stuck or uncomfortable. I would need some level of accountability or assessment to help me persevere. I also learned that I am not cut out for customer service roles!

I had one of our sales leaders try this interaction, and they said that they spent 15 minutes speaking with the AI customer before they got to a good resolution. (It required a change of tactics halfway through.) The sales leader said they felt they had to solve the issue so they could “win.” Salespeople are just built differently, I guess.

We also tried a coaching interaction with an AI that has expressive understanding, to see if it could detect my emotion without relying on the content of my words. While it was impressive that it could pick up on my sentiment (even if it wasn’t reflected in my words), I wasn’t a fan. Perhaps because I was in testing mode, it felt inauthentic when the AI acted like it understood my feelings. The AI also wasn’t as good at detecting when to jump into the conversation, repeatedly interrupting my discontented ramblings.

Conclusion: Expression analysis is probably more helpful for real, human-to-human interactions.

Experiment No. 3: Faster with GPT-4o

When GPT-4o text mode became available, we decided to revisit the AI coach we created in Experiment No. 1. Text mode is advertised as 50% faster than GPT-4 Turbo, so using it seemed like a great way to find out if we could reduce the latency.

The inclusion of GPT-4o in our AI coach did reduce the latency a bit, as you can see here:

Conclusion: Using GPT-4o, the latency in our AI coach application dropped from an average of 3.6 seconds to 2.2 seconds, making the conversation feel much more natural.
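For anyone wanting to run a similar comparison, average latency is straightforward to measure with a timing wrapper. The harness below is our own sketch (any model client callable can stand in for `fn`), and the arithmetic shows the averages reported above amount to roughly a 39% reduction in wait time.

```python
import time
from statistics import mean

def timed(fn, *args):
    """Return (result, elapsed seconds) for a single call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def average_latency(fn, prompts):
    """Mean response time for one model across a set of prompts."""
    return mean(timed(fn, p)[1] for p in prompts)

# The averages reported above imply roughly a 39% drop in wait time:
old_avg, new_avg = 3.6, 2.2
reduction = (old_avg - new_avg) / old_avg  # ≈ 0.39
```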

Looking Ahead

We aren’t done experimenting with voice. We’re implementing some of the suggestions we’ve already received from L&D folks about the AI coach (including the inclusion of transcripts, summarization of action items, a better user interface, analysis, and feedback options).

We’ll keep testing the latest LLMs. And, we’ll explore voice for new use cases (maybe something used on the go, something interacted with during meetings, or something to help complete administrative tasks).

Here’s a quick peek at the live transcript work:

Takeaways for L&D

As consumer technology gets more advanced, it puts even more pressure on the experiences L&D creates. With this in mind, what does the advent of conversational voice AI mean for L&D professionals?

  • Voice AI isn’t great for everything, but it seems well-suited for certain use cases (like skill development). Figure out what those are for your audiences and find appropriate solutions.
  • Voice AI will let L&D reach more people with better experiences at less cost, but it will likely also create a premium for real, human interactions.
  • Certainly, effective coaching requires more than what our experiment offered. However, we see AI interactions as a great complement to your learning programs.
  • GPT-4o will be able to do almost all of the heavy lifting here, but L&D will still likely need a vendor to provide reporting and analysis as well as connections to supplementary workflows.

If you’d like to talk about conversational AI, please send me an email at tblake@degreed.com

Thanks for experimenting with us!

See all the Degreed Experiments.

Introduction: Degreed Experiments with Emerging Technologies

AI Taxonomies for Skills: Actionable Steps for Career Goals

To find out more about chatbots and L&D, check out our companion blog post Chatbots for Learning: Gateway, Guide, or Destination?

The post Conversational Voice AI for L&D: Coaching, Role Playing, and More appeared first on Degreed.

]]>
https://degreed.com/experience/blog/conversational-voice-ai-for-ld-coaching-role-playing-and-more/feed/ 0
Chatbots for Learning: Gateway, Guide, or Destination? https://degreed.com/experience/blog/chatbots-for-learning/ https://degreed.com/experience/blog/chatbots-for-learning/#respond Thu, 16 May 2024 20:36:47 +0000 https://explore.local/2024/05/16/chatbots-for-learning/ Where do chatbots fit in the learning tech landscape? Here are five illustrative examples of using chatbots for learning to show limitations and potential.

The post Chatbots for Learning: Gateway, Guide, or Destination? appeared first on Degreed.

]]>
What’s your go-to resource when you’ve got a question? A few years ago, it most likely was Google. Today, ChatGPT is stealing eyeballs from traditional search engines. Why click through links when a chatbot can provide an instant answer?

More and more, chatbots are the starting point for online activity. They’re showing up almost everywhere: social apps, search engines, browsers, phones.

What does this mean for learning technology? 

Gateways, Guides, and Destinations

Let’s see where chatbots fit in the learning tech landscape by considering where learners go. Whether developing a new skill or discovering a hotel for an overnight trip, learning experiences are made up of gateways, guides, and destinations.

A gateway is where a learning journey begins, a guide is the navigator that directs the journey, and the destination is the place learners ultimately end up.

For example, to book a trip for a vacation, you might start with a Google search (the gateway), compare options on Kayak (the guide), and then reserve a room at a hotel website (the destination).

We can similarly map out the resources employees use on their learning journeys. Many Degreed clients’ learning technology ecosystems are a combination of gateways, guides, and destinations.

In the case of our clients, a learner uses a gateway like an email, mobile app, browser extension, Microsoft Teams, or intranet to start their journey. That gateway leads them to a learning guide, our LXP. Then the LXP leads to final destinations like an LMS, Coursera, LinkedIn Learning, and other content providers.

Chatbots: Gateway, Guide, Destination Graphic

Of course, learners can’t always start at the destination. This is why gateways are critical in learning technology. The gateway meets learners where they are, maximizing convenience while directing them somewhere else.

The guide helps learners see and compare options (sometimes thousands!), provides curated and personalized recommendations, and manages access rights (authorizations, approvals, permissions, integrations) so learners can get to what they need.

The path learners ultimately take depends on their use cases. And depending on the use case, a chatbot can serve as a gateway, guide, or destination. Let’s break it down further.

Workplace Learning: Chatbots for Learning Use Cases

1. Solving Specific, Work-related Problems

Forgot how to write a function in Excel? Or looking for a reminder on how to update your CRM? You can engage an enterprise chatbot and get immediate answers. When it comes to questions and answers, brainstorming, content generation, or summarization, chatbots are likely your final destination. 

Chatbots for Workplace Learning Example in Excel

2. Completing Compliance Training

Another common use case for learning at work is compliance training. I don’t know about you, but I need to be notified about compliance training. (I’m never going to proactively ask about it.) Along with those notifications, you also need to see the status of your assignments and be granted access to training materials.

Could a chatbot serve as another gateway for compliance training? Technically, yes. Is it likely to be used this way? No.

Learning Platform Screenshot for Workplace Training

It’s unlikely because most employees won’t remember to ask or can’t be bothered to ask. And the one thing chatbot technology requires? Prompts. This is a common limitation when companies use chatbots for learning: you have to ask.

3. Navigating a Career Change

Whether you’re a new employee or just got promoted, you don’t know what you don’t know. In this unfamiliar situation, you need a guide. And a chatbot doesn’t always make the best guide.

To be a good guide, the chatbot must know a lot about you, your company, and your role. Even if you provide all that information, chatbot technology often struggles to direct learners to external resources.

Degreed Screenshot of Learning Platform for Career Change

Unlike a chatbot, a learning system like an LXP can help you navigate your complex transition. A learning system can:

  • Manage your enrollment and send notifications
  • Integrate with a wide range of complementary systems and applications
  • Provide you with multi-step guidance for complex skills
  • Run reports to help you understand what you need to learn

What’s the role of a chatbot in this scenario? If programmed correctly, a chatbot can serve as a gateway to a learning guide like an LXP. A chatbot could also offer general advice or encouraging words of support, thereby acting as a supplementary destination.

Screenshot of Using Learning System to Navigate a Career Change

4. Building New Skills

Upskilling and reskilling are more critical than ever, requiring ongoing engagement, practice, and collaboration. As I work to learn more about AI, I need chances to discuss and apply concepts. And my prompt to ChatGPT to “tell me everything I need to know about AI” won’t provide that practice or collaboration.

Screenshot of Learning Academies Technology Degreed

An academy is likely your guide and destination for building new skills. Why? Because an academy coordinates complex learning journeys, facilitates workflows for practice and feedback, and manages cohorts and collaboration.

And the chatbot? It can play a supporting and complementary role. It can be one of many third-party destinations incorporated into your experience, providing you with opportunities to role play, reflect, and get feedback.

5. Exploring Development Opportunities

Because a chat interface doesn’t present items unless you ask, presents information only as linear text, and probably lacks access to the necessary information, a chatbot is not ideal for browsing, filtering, or exploring development opportunities at your company.

Your learning and talent systems are better equipped to show you what’s available (no asking required) and use data about you (like your skills) to determine fit.

Screenshot of Exploring Development Opportunities in a Learning System

However, a chatbot is an effective tool for personal reflection. Simply tell a chatbot what your interests are and it can help you formulate goals, plans, and alternative career options you may not have considered. And after you’ve reflected, you can use the chatbot as a gateway to your organization’s talent systems—to see what types of learning opportunities are available.

A chatbot for learning has limitations.

While chatbots bring new capabilities to the learning journey, they can’t do it all. It’s easy to get caught up in the excitement about chatbots, but it’s worth reiterating their limitations:

You always have to ask. Chatbots don’t provide much in terms of guidance or recommendations. The challenge of always having to figure out what to ask can discourage some people from using them.

Routing and navigation are lacking. While AI aims to please by providing an answer (even if it has to hallucinate), it’s not as good at helping users navigate to other places. While chatbots are getting better at linking to search results, they aren’t adept at surmounting the access and permissions requirements of enterprise resources.

Below is an example of asking GPT-4o (the latest OpenAI model) for a summary of a book. Even though the model has the capability of searching the web, it initially refuses to and instead hallucinates an entirely incorrect answer. The latest models struggle to route requests even among their own very limited, native tools.

Knowledge about the user is weak. A chatbot knows what it’s been prompted to do, and it uses the data it’s been trained on to respond to those prompts. New models like GPT-4o maintain basic memory about the user based on chat history. But that data probably doesn’t include the latest activities and information captured by your work and learning applications.

Specialized workflows are undeliverable. While some learning comes in the form of simple answers, other learning needs coordinated experiences a chatbot cannot provide. These more complex workflows require specific contexts, controls, and notifications. The most advanced applications are starting to use a mixture of LLMs to enhance multi-modal, multi-step learning experiences in your learning systems.


Chatbots: Increasingly Important to L&D

While chatbots can’t do everything, they’re becoming an important gateway to specialized enterprise applications like an LXP or CRM.

For companies that don’t have internal learning resources and licensed providers, chatbots can serve as guides for external learning resources. For companies that do have learning ecosystems, an LXP serves as the guide for all things L&D.

And while chatbots don’t support all the components for in-depth skill development, they’re increasingly a go-to destination for quick answers.

Ultimately, you will need connections both to and from your enterprise chatbots and your learning systems. Neither one will be complete without the other.

If you want to explore integrating your enterprise chatbots with your learning system, we’d love to chat with you. Email tblake@degreed.com

The post Chatbots for Learning: Gateway, Guide, or Destination? appeared first on Degreed.

]]>
https://degreed.com/experience/blog/chatbots-for-learning/feed/ 0
Degreed Experiments with Emerging Technologies https://degreed.com/experience/blog/degreed-experiments-emerging-technologies/ https://degreed.com/experience/blog/degreed-experiments-emerging-technologies/#respond Thu, 18 Apr 2024 18:19:18 +0000 https://explore.local/2024/04/18/degreed-experiments-emerging-technologies/ This is Degreed Experiments: a blog series exploring the suitability of emerging technologies for the challenges in L&D.

The post Degreed Experiments with Emerging Technologies appeared first on Degreed.

]]>
Honestly, I don’t care much for change. I’d rather be at Blockbuster on a Friday night than sifting through Netflix recommendations. But I do care a lot about how we learn at work. So, for the past year, my mind has been racing thinking about the impact of all these emerging technologies for L&D. What will survive past the hype? How do we use it to make a real impact and not just create cheap imitations? 

We’ve been testing and discussing these topics inside and outside of Degreed. But, we’d like to open up the conversation. 

That’s why we’re excited to announce a new initiative, Degreed Experiments. This will be a blog series (and hopefully a two-way conversation) geared toward exploring the suitability of emerging technologies for the challenges in L&D. Through hands-on prototyping, we’ll share with you what works, what doesn’t, and new questions we encounter along the way. Through it all, you’ll become a more informed and credible partner who can help your business evaluate emerging technologies and opportunities.

Why Degreed Experiments?

We’ve been building learning products for a long time, but the way we build products is changing. With the incorporation of AI, product experiences are less predictable and deterministic. You don’t really know what an AI-powered product will be like until you can play around with it.

Combine that unpredictability with the future of work and the evolving role of L&D and we have a dynamic that requires rapid experimentation and iteration.

Now, we see all the AI chatter and don’t want to contribute to the noise. We’re determined to give you actual data and examples that can only come from trial and error.

What You Can Expect: Hands-on Prototypes

First, we’re builders. That means there will be more showing as we demo hands-on prototypes, critically evaluate the outputs, and let you judge the results for yourself.

In future posts, we’ll deep dive into emerging technology use cases like:

Upgrading Practice Scenarios from Chat to Live-speech Conversations

Our Hypothesis: Practice scenarios are a powerful way to develop skills in context. Having the option to not only practice in a chat-based scenario but also in a live conversation (like you’re really talking to someone) will make the scenario feel more real. This could be more effective for use cases like sales calls, interviewing, and executive coaching.

Upgrading practice scenarios from chat to live-speech conversations in learning technology

Skill Inference from Employee Data

Our Hypothesis: Leveraging existing employee data and activity can help you quickly identify skill strengths and gaps, resulting in better data coverage and more up-to-date profiles. This should be good enough for some use cases (though not all). We also expect the details of how this is done to matter a lot.

Skill inference from employee data in learning technologies

Dynamic Taxonomies for Skills, Tasks, or Other Requirements

Our Hypothesis: Organizing skills, people, and work quickly and flexibly (versus rigid and traditional taxonomies) can speed up building learning experiences, identifying talent, and mapping career paths.

Dynamic taxonomies for skills, tasks, and other requirements in emerging technologies

Smart, Mobile Nudges and Content Delivery

Our Hypothesis: Lots of your existing content could be reformatted into mobile-friendly nudges to better optimize engagement and retention. The micro-learning concept has been overplayed, but we think there are new ways to approach this with emerging technologies.

Smart, mobile nudges and content delivery prototypes

Identifying Internal Talent

Our Hypothesis: We need dynamic ways to match people to internal opportunities. This may include looking at someone’s skills, experience, working relationships, or other key attributes. Those doing the evaluation should be able to change the criteria used for matching as needed to find the best fit.

Identifying internal talent with emerging technologies

Building Personalized Learning Experiences

Our Hypothesis: Learning is more than content. We can design and personalize experiences that are optimized for closing performance gaps.

Building personalized learning experiences prototypes

Side-by-side Model Evaluation

Our Hypothesis: Comparing models and methodologies side-by-side will give us the ability to identify the best fit for any use case.

Side-by-side model evaluation in learning technology
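A side-by-side harness can be as simple as running one prompt through several model callables and recording each output and latency. This sketch uses stub lambdas in place of real model clients (the model names and transforms are purely illustrative):

```python
import time

def compare(models, prompt):
    """Run the same prompt through each model; collect output and latency."""
    results = {}
    for name, call in models.items():
        start = time.perf_counter()
        output = call(prompt)
        results[name] = {"output": output, "latency": time.perf_counter() - start}
    return results

# Stub callables stand in for real model clients.
models = {
    "model-a": lambda p: p.upper(),
    "model-b": lambda p: p[::-1],
}
report = compare(models, "summarize this policy")
```

With real clients plugged in, the same loop lets a human (or an evaluation rubric) judge the outputs side by side for any given use case.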

Reviews of the Latest Research on Emerging Technologies

AI is advancing rapidly and chaotically. We’re going to explore the latest technical research and evaluate its impact on L&D. 

Recent and dramatic advancements in emerging technologies will open new use cases for learning and development. 

  • Multimodal: Images, audio, video, and avatars will open new doors (and traps) for content creation and delivery.
  • Evolution of RAG (Retrieval Augmented Generation): The ability to search, summarize, or ask questions of your own source documents is a huge use case. We’re seeing rapid advancements in RAG methodologies, longer context windows, methods for extending memory, and new architectures (beyond the transformers used in models like ChatGPT) that will empower knowledge management.
RAG Retrieval Augmented Generation in Learning Technologies
  • Agents: As AI models get more similar and competitive, the next frontier will be the use of agents that can perform chains of actions (including planning and evaluating the work done) using various models. This will open the door for even more automation.
An overview of methods for LLM-agent planning in learning technologies
  • Screen vision: Researchers are exploring AI assistants that can see and act on what’s on a screen, regardless of whether any formal integration between applications exists. This could transform on-the-job training and performance support.
  • Wearables: New AI-wearables are just emerging but could inspire more ways to support deskless workers.

In addition, non-obvious but important limitations exist. While these can be managed or mitigated, it’s still crucial to understand them.

  • Factuality: AI models still struggle with factuality when creating long-form content and providing accurate citations.
  • Long-form reasoning: While AI models do a great job of finding self-contained pieces of information, they struggle to reason across long contexts.
  • Following instructions: Even the most advanced models still fail to follow instructions completely about a third of the time.
  • Unpredictability: Not all models are well suited to all tasks. Outputs from models change, and the models themselves are frequently updated in unpredictable ways.
  • Cost: As usage scales, cost will be an important factor. Vendors will look to make tradeoffs between capabilities and cost or pass the cost on to clients.

Finally, there are more strategic implications.

  • Are you prepared to help navigate career mobility when AI disrupts the viability or capacity of certain roles?
  • What functionality should you get from new vendors vs. waiting for it to be incorporated into your existing applications?
  • How will new regulations and audits affect which capabilities you can take advantage of?
  • Which data will AI have access to, what is the data quality, and will it increase visibility to sensitive or misclassified data?
  • Which L&D responsibilities might AI displace?
  • What is required to build AI literacy in your organization?

See, I told you there’d be lots of questions. So, follow along and we’ll see if we can answer them together.

What’s In It For You

Hopefully, a lot. We’ll let you judge for yourself the suitability of the emerging technologies for your use cases and organization. We’ll make it easy to stay up to date with all the advancements. We’ll help you become a better partner for your business as your colleagues ask similar questions internally. And hopefully we can even have some fun along the way!

How to Get Involved

Send us ideas or questions. We’d love to hear what’s on your mind. Email tblake@degreed.com.

Volunteer to take our experiments for a test run. We’ll pick a few partners to help test and provide feedback on each prototype. Let us know which topic has piqued your interest by emailing tblake@degreed.com.

Follow us on LinkedIn to catch our next posts, which will be coming soon.

Thanks, all! We’re excited to see what we learn together.

Watch an on-demand session with AI guru Noelle Russell discussing the future of AI.

The post Degreed Experiments with Emerging Technologies appeared first on Degreed.
