(Kathy Pham – Workday AI Masterclass)

At the Workday Innovation Summit last spring, I was able to get a much deeper view of Workday’s AI strategy – and responsible AI framework. But not everything got resolved. I’ve had good discussions with Workday about what I believe is often missing from so-called “responsible AI”: customer education. Talking with Workday’s Kathy Pham, I got the feeling something was in the works – now I know. In the lead-up to this year’s Workday Rising, Workday did something about that – by releasing a Workday AI Masterclass – Become an enterprise AI expert. This class, which has four sections, serves as something of an overview (no registration currently required – there is also a YouTube playlist). Workday has also issued a more in-depth AI and ML for finance program, which includes a certification (sign up required). As of this writing, I’ve worked through most of the videos – though new ones have been added.
**Customers want to hear about the AI pros and cons**

This time around, I’ll review the Workday AI Masterclass – which includes three course modules: “Getting Started with AI,” the “Business Value of AI,” and “Responsible AI Governance.” All of the speakers in this overview course are Workday employees (the certification course has a number of outside experts as well). It’s about three hours of videos currently, and some are information-packed, so I’d allow time for notetaking and rewatching. My agenda this fall? Surface meaningful details on enterprise vendors’ AI architectures. Yes, most vendors, including Workday, are taking customer data privacy very seriously, but as I’ve written:
There is a significant gap in the public discourse surrounding artificial intelligence (AI): a lack of transparency and open discussion about its benefits and drawbacks, particularly as they are presented to customers. That gap leaves customers feeling uninformed, and potentially vulnerable to AI-related risks.
Enterprise AI needs to be more than just a tool; it’s a strategic, data-driven, process-oriented approach to decision-making. A key distinction here is enterprise AI versus consumer AI. Enterprise AI isn’t just about analyzing data; it’s about understanding the context and meaning behind that data, and about using AI to drive real-world impact and solve complex business problems.
LLMs can also be used for creative writing, translating languages, and even composing music. This ability to generalize across domains is a key factor in their success; it makes them incredibly versatile and adaptable across a wide range of tasks.
For enterprises, output precision is a critical issue:

When you ask things like, ‘What was my last paycheck,’ or ‘What’s our company’s PTO policy,’ we need to make sure that those answers are consistent and correct, regardless of whether you’re in the United States or somewhere in Europe, whether you’ve been at Workday or your organization for a month, or whether you’ve been there for five years.

**How Workday uses LLMs – more selectively than you might think**

I could nitpick on LLMs’ ability to generalize – LLMs still struggle when pushed far outside their training data. But when your model is trained on the whole Internet, your ability to generalize is pretty darn good, e.g. “give me Workday’s AI value proposition in the form of a Shakespearean sonnet.”
This is a crucial point because it highlights the difference between “AI overreach” and responsible AI use. “AI overreach” refers to the tendency to overestimate the capabilities of AI systems and their ability to replace human judgment and decision-making. This can lead to unrealistic expectations and potentially harmful consequences.
Therefore, we rely on stochastic gradient descent (SGD) to efficiently explore this vast parameter space. Stochastic gradient descent is a powerful technique that iteratively updates the parameters based on the gradient of the loss function. Rather than computing that gradient over the entire training set, it estimates it from a small random sample (a mini-batch) of the data, then updates the parameters in the direction of the negative gradient. This process is repeated for many iterations, gradually converging towards a minimum of the loss function.
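To make that concrete, here is a minimal mini-batch SGD sketch on a toy linear model, written in plain Python/NumPy. The data, learning rate, and batch size are my own illustrative choices; real model training applies the same loop at vastly larger scale.

```python
import numpy as np

# Toy example: fit y = w*x + b by minimizing mean squared error with
# mini-batch stochastic gradient descent.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=1000)
y = 3.0 * X + 0.5 + rng.normal(0, 0.1, size=1000)  # "true" w = 3.0, b = 0.5

w, b = 0.0, 0.0            # parameters to learn
lr, batch_size = 0.1, 32   # learning rate and mini-batch size

for step in range(500):
    # Sample a random mini-batch instead of using the full dataset
    idx = rng.integers(0, len(X), size=batch_size)
    xb, yb = X[idx], y[idx]

    # Gradient of mean squared error with respect to w and b
    err = (w * xb + b) - yb
    grad_w = 2 * np.mean(err * xb)
    grad_b = 2 * np.mean(err)

    # Step in the direction of the negative gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # should approach 3.0 and 0.5
```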
Let’s explore how AI can be used to automate tasks in accounting and finance.

**Good Enough**
In accounting and finance, AI can automate tasks where “good enough” output is acceptable: results that are not perfect, but that still meet the basic requirements for accuracy and completeness.
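One common way this plays out, shown in the sketch below, is a confidence threshold: the model’s high-confidence suggestions are processed automatically, and everything else is routed to a person. The invoice-coding scenario, threshold, and data here are my own illustration, not a pattern taken from the course or a description of Workday functionality.

```python
from dataclasses import dataclass

# Hypothetical invoice-coding example: a model suggests a GL account for each
# invoice line along with a confidence score. High-confidence lines are
# auto-posted; the rest go to a human review queue.
@dataclass
class InvoiceLine:
    description: str
    suggested_account: str
    confidence: float

AUTO_POST_THRESHOLD = 0.95  # illustrative cutoff for "good enough"

def route(lines):
    auto_post, needs_review = [], []
    for line in lines:
        target = auto_post if line.confidence >= AUTO_POST_THRESHOLD else needs_review
        target.append(line)
    return auto_post, needs_review

lines = [
    InvoiceLine("Office chairs", "6400-Furniture", 0.98),
    InvoiceLine("Consulting fees", "7200-Services", 0.71),
]
auto, review = route(lines)
print(len(auto), "auto-posted;", len(review), "sent for human review")
```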
* Vendors should clearly define their AI technology’s purpose and intended use.
* Vendors should be transparent about their AI’s limitations and potential biases.
* Vendors should be proactive in addressing customer concerns and providing clear explanations.
* Vendors should be prepared to adapt their AI technology to meet evolving customer needs.
This is a call to action for all stakeholders involved in enterprise AI projects. We need to move beyond the hype and embrace a more grounded, pragmatic approach. We need to focus on the real-world applications of AI, not just the theoretical possibilities.
* We take a hybrid approach.
* We use partners like GCP and AWS.
* We also build our own Large Language Models.
Our approach to developing and delivering cutting-edge AI solutions is a unique blend of innovation and strategic partnerships.
Smaller models are ideal for tuning to a customer’s private data and custom syntax. In AI speak, this is sometimes called parameter-efficient fine-tuning, or PEFT:

A good example is something like job descriptions. A model that we built internally is able to capture the nuance and tone of how you write your job descriptions, the syntax and structure of your job description, and it’s also able to stay up to date as your job descriptions tend to update. That means we’re also able to capture proprietary technology or terminology that you don’t want in the public domain. Those models are all trained specifically on your information or your private information, and deliver generative results specific to you.
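For readers who want to see what parameter-efficient fine-tuning looks like in practice, here is a minimal LoRA-style sketch using the open-source Hugging Face transformers and peft libraries. The base model, target modules, and hyperparameters are illustrative assumptions on my part; this is not a description of Workday’s actual training pipeline.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative base model; any causal language model checkpoint would do.
base_model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA: freeze the base weights and train small low-rank adapter matrices.
# Rank, alpha, and target modules are example values, not recommendations.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # attention projection layer in GPT-2
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Only a small fraction of parameters is trainable, which is what makes
# frequent re-tuning on a customer's own text (e.g. job descriptions) cheap.
model.print_trainable_parameters()
# The actual fine-tuning loop on private data is omitted here.
```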
Smaller internal models are advantageous in several ways. They are more efficient in terms of computational resources and memory usage. They are also more adaptable to changes in data distribution, making them better suited for handling real-world scenarios. Smaller internal models can be retrained frequently, minimizing issues like model drift.
The efficiency of smaller internal models is a significant advantage, particularly in resource-constrained environments.
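As a rough illustration of why cheap retraining matters, a team might track a small model’s accuracy on recent data and trigger a retrain once it degrades past a tolerance. The thresholds and function below are hypothetical, my own sketch rather than anything described in the course.

```python
# Hypothetical retraining trigger for a small internal model: compare recent
# accuracy against the accuracy measured at deployment time and retrain when
# the gap exceeds a tolerance.
BASELINE_ACCURACY = 0.92   # measured when the model was last deployed
DRIFT_TOLERANCE = 0.05     # acceptable drop before retraining

def should_retrain(recent_accuracy: float) -> bool:
    return (BASELINE_ACCURACY - recent_accuracy) > DRIFT_TOLERANCE

if should_retrain(recent_accuracy=0.85):
    print("Accuracy drop detected, kick off a retraining run on fresh data")
```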
Combining LLMs with smaller, specialized models (an approach often referred to as “hybrid models”) creates a more robust and versatile system. Hybrid models leverage the strengths of both: the ability of LLMs to handle complex, open-ended language tasks, and the efficiency of smaller models at well-scoped, high-volume work. This synergy allows for a more efficient and effective system, capable of tackling a wider range of tasks. For instance, consider a chatbot application.
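To make the chatbot example concrete, here is a purely illustrative routing sketch: routine, tenant-specific questions go to a small, tuned model, while open-ended requests fall through to a general-purpose LLM. The topic keywords, the StubModel class, and the model names are all hypothetical stand-ins, not Workday components.

```python
class StubModel:
    """Stand-in for a real model client; returns a canned reply."""
    def __init__(self, name: str):
        self.name = name

    def generate(self, prompt: str) -> str:
        return f"[{self.name}] answer to: {prompt}"

# Illustrative list of routine, tenant-specific topics
ROUTINE_TOPICS = ("paycheck", "pto", "time off", "benefits enrollment")

def answer(question: str, small_model: StubModel, large_llm: StubModel) -> str:
    q = question.lower()
    if any(topic in q for topic in ROUTINE_TOPICS):
        # Routine questions: small, cheaper model tuned on this tenant's data
        return small_model.generate(question)
    # Open-ended questions: general-purpose LLM
    return large_llm.generate(question)

small = StubModel("small-tenant-model")
large = StubModel("general-llm")
print(answer("What was my last paycheck?", small, large))
print(answer("Draft a job description for a data engineer", small, large))
```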
* The benefits of using smaller, high-efficiency AI models are faster feature development, quicker shipping, and a foundation for building more features.
**Module 1: AI for HR**
* **AI in HR:** AI is transforming HR functions, automating tasks, and improving decision-making.
* **Risk Levels:** AI projects in HR can pose significant risks, including bias, privacy violations, and job displacement.
But if someone like me, who has an immune reaction to vendor propaganda, can enjoy this course, I suspect many others would too. However, in future editions, Workday should consider doing a separate history of AI and a history of innovation and AI at Workday. Combining the two was awkward, and resulted in a somewhat disjointed AI history that was picked up again later in the course. Most business users and execs don’t need to become AI experts. To take it up a notch, they should combine an overview course like this with a deeper dive into their industry or domain. So next up, I’ll review Workday’s AI and ML for finance program.
Note: I intentionally did not contact Workday about this course; I wanted to review it completely independently. But that also means there could be additional changes; Workday seems to be tweaking things like module access leading up to Rising. After this week in Vegas, I’ll update this post with any new links.