Lattice might have been more right than wrong


Recently, Sarah Franklin, the CEO of Lattice, made a bold announcement about the future of work. Lattice predicted that humans will soon sit elbow to digital elbow with AI teammates. AI teammates are not a new idea; Asana announced something similar not long ago. What was different was that Lattice announced it would be the first company to treat these new digital workers just like their human counterparts – it would allow you to onboard them, give them goals, and assign them managers.

Not coincidentally, Lattice recently expanded its HR tool suite to include an HRIS module. HRIS modules are used as a store of record for employee data. It’s likely that “HRIS platform that includes digital workers” seemed like an attention-grabbing way to launch. However, within minutes of the announcement, the internet collectively eye-rolled and eviscerated Sarah’s post.

The unfortunate part of the outcome is that a few poorly constructed narratives overshadowed some genuinely interesting ideas. More importantly, the article was spot on about some realities of how most organizations will adopt AI.

At Winslow, we’ve talked to nearly 100 HR leaders in the last two months about their planned adoption of AI tech. We wanted to share what we have learned, how we see it playing out, and where we think Lattice got it wrong and in many cases got it very right!

But first, a mental model…

Interns with Photographic Memories

We think the right mental model for AI today (not including generative AI like ChatGPT) is like an intern who has a photographic memory and can read 1000 words per second. This kind of intern consumes any documents you throw at them and finds answers in those documents with instant recall. They are even smart enough to understand some context (“pretend you are an HR administrator answering an employee question”) and adjust their recall a bit to make the output more efficient to work with.

Thus, the best place to get value from an AI right now is anywhere there is a lot of information retrieval from unstructured documents. Three obvious areas that come to mind: legal, customer support and HR.
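The "intern with recall" behavior above can be sketched, very loosely, as retrieval over a pile of unstructured documents. A toy version using simple keyword overlap (real systems use embeddings and language models; the documents and function names below are purely illustrative):

```python
# Toy illustration of retrieval over unstructured documents:
# pick the document that best matches a question by word overlap.
# Real co-pilots use embeddings + LLMs; this is only a sketch.

def tokenize(text: str) -> set[str]:
    """Lowercase and split text into a set of words."""
    return set(text.lower().split())

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question."""
    q = tokenize(question)
    return max(documents, key=lambda d: len(q & tokenize(d)))

# Hypothetical HR documents.
docs = [
    "Employees accrue 15 days of paid vacation per year.",
    "Health insurance enrollment opens each November.",
    "Expense reports are due within 30 days of purchase.",
]

answer = retrieve("How many vacation days do I get?", docs)
```

Even this crude version shows why a clean, relevant document set matters: the AI can only recall what it has been fed.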

Going Vertical (Lattice – 1, Internet – 0)

AI technology will likely first be adopted vertically in the form of co-pilots. As Sarah pointed out, vertical solutions are popping up already: Devin for software development, Harvey for legal, Einstein for support, Piper for sales, and now Winslow for HR. This maps to Lattice’s view of digital workers as being logically located in an org chart. Just like human workers, digital workers have specializations. We think Lattice got this important part of the adoption cycle right without saying it overtly. Going vertical is natural for a few reasons:

  • Getting a clean and relevant data set for the vertical (e.g. all your customer contracts, all your HR docs) is critical to getting value from the AI. There is a big difference between ChatGPT which needs to be fairly smart about a lot of things and a corporate AI which needs to be comprehensively smart about a few things.
  • Just like with SaaS software, understanding workflow and process around data is as important as having the right data itself.

As an example, we invest a lot of time at Winslow building plugins for Gmail, Outlook, Slack and Teams. We know that HR professionals get employee questions in all those places and staying “in workflow” matters when you’re trying to save the HR professional time. Workflows are different for every department which will lead to different tooling for each vertical AI.

Onboarding (Lattice – 2, Internet – 0)

The flip side of being “in the workflow” is having access to the right information. If you’re the legal team, you need all contracts in your AI co-pilot. If you are the dev team you need your code base. With Winslow, we spent a lot of time building connectors to all the places we know HR documents live so HR teams can quickly get their documents loaded and then spend zero time worrying about keeping them updated.

As AIs become more sophisticated and can help team members work through multi-step processes, having access to the right procedures and processes matters just as much as having the right data.

We think this is another idea Lattice got right, albeit using different words. Lattice suggested an AI needs to be “onboarded”. If you replace “onboarded” with “supplied with all the relevant documents and procedures”, then you can see the value of thinking this way. Of course, traditional human onboarding steps like filling out tax forms and picking benefits won’t matter for digital workers. This seems to be where the Internet ran away with the conversation a bit. Ironically, mundane things like getting paid and paying taxes engendered the most debate about how digital workers compare.

Digital Workers vs A Digital Worker (Lattice – 2, Internet – 1)

One of the things Lattice missed, in our view, was putting a named digital worker on the org chart next to multiple human workers with the same job. Setting aside the slightly tone-deaf move of anthropomorphizing technology to a group of people whose job it is to care about the human experience (i.e. HR teams), the implication here is that you might have multiple digital workers performing the same tasks, like you do with humans. This seems to miss the whole concept of technology scale. It reminded us of one of the most poignant scenes in Her [spoiler alert for those who have not watched it], where Theodore asks Samantha if she’s dating anyone else and she replies that she is talking with 8,316 people and has fallen in love with 641 of them. We need our AIs to be doing many things at once.

OKRs are OK (Lattice – 2, Internet – 1)

Another claim that Lattice made is that AIs would have goals. While it will be critical to monitor the engagement of AI co-pilots – how much they are being used, how much they are reducing workload or increasing the efficiency of the human team – it will be a while until AIs are capable of making decisions about how they will achieve goals and, more importantly, even longer until they are able to influence humans to change the work they are doing to serve those goals.

AIs Will Need Managers and Owners (Lattice – 3, Internet – 1)

There is an old adage: don’t hire an intern if you don’t have someone who can pay them a lot of attention. Junior employees can be a lot of work. They need coaching, they need appropriate materials to get them productive, they need to be able to produce poor results without repercussions and learn from them, and they need constant feedback to improve. It’s a lot of work, but the best interns become true assets to the business. AI teammates will have a similar dynamic. Someone on each vertical team will need to monitor and manage their success and effectiveness, just like you would if you implemented a new piece of performance management software or were managing a large CRM system. Sometimes the underlying policies and procedures that drive the AI will need to be updated to help the AI produce more consistent answers and outcomes.

Teammates vs Technology (Lattice – 3, Internet – 2)

We’ve spent a lot of time thinking about what the word teammate really means in the context of the progression of AI from where it is today to where it can be in the future. The phrase “AI teammate” is popping up in a lot of places. If you think of all the technology you use today, your HRIS system for example, do you view it as a teammate? Even if it had a conversational interface, would you view it as a teammate? Likely not. We think Lattice jumped the gun here a bit.

We wouldn’t dare to try and peel the onion of the human condition in a blog post, but there are a few things that need to emerge for us to start believing we’re moving away from co-pilots and getting closer to teammates:

  • Process vs policy – right now AI is excellent at finding answers in unstructured documents. Over time it will need to become excellent at understanding procedures. Procedures require memory (where are we in the process), understanding of their environment, coordination between multiple parties, and self-evaluation. Each one of these items is complex and the technology will take time to develop.
  • Context – humans quickly learn their environment. What technology stack do you use? When you submit a performance review in one system, what other steps in what other systems are necessary to be performed? When we talk to HR professionals, the spaghetti of their existing technologies is hard enough for them to untangle to get things done let alone explain it to an AI.
  • Documentation – we take for granted how much knowledge in a company, even a big one, is tribal. New hires are trained by operations staff on how to perform procedures. Rarely is everything written down and still accurate. For AIs to become teammates, they will either need lots more documentation or need to effectively sit in on training sessions to start learning the tribal knowledge of the team. This seems somewhat feasible in the future given the new omni-modalities of GPT-4o.
  • Agency – most AI technology today is a response to something. We query it for an answer, we ask it to pore over data. But teammates have agency. They have goals. They reach out to you when they need things. Even something as simple as ChatGPT watching the questions you’re asking and suggesting something interesting but adjacent (like a friend or teammate might suggest an article to read based on what you’re talking about) is still missing. Shifting the technology from being reactionary to proactive will be fascinating.

Conclusion

Nothing great happens without being bold. We loved seeing Lattice take a leap into the unknown and push the conversation forward. By our count, Lattice actually came out ahead 3-2 on interesting versus cringe ideas. As a very knowledgeable friend in the industry quipped, “the only real disappointment was that Lattice pulled back too far too fast as a reaction to the Internet.” We couldn’t agree more. In the end, the future we’re all collectively pulling for is one where humans work less on mundane tasks and more on strategic ones. AI will be the unlock that gets us all there. Achieving this is our goal at Winslow, and it is why we built the first AI-powered HR co-pilot as a starting point that every HR team can adopt and succeed with.
