Season 2 · Episode 4
MLOps, AIOps, and Data Startups with Jocelyn Goldfein
Dealing with data hyperabundance, solving economic problems for businesses, and changing lives for the better. Tune in as Jocelyn Goldfein, Managing Director at Zetta Venture Partners, and Sam discuss engineering leadership, organizational graph structures, and the productization of AI.
Episode Transcript
Sam:
Hi, I'm Sam Ramji, and this is Open Source Data. I'm here today with Jocelyn Goldfein. Jocelyn is a managing director at Zetta Venture Partners.
Welcome Jocelyn, we're delighted to have you here today.
Jocelyn:
Oh, it's so great to be here. Thanks for inviting me, Sam.
Sam:
I got to meet you, to my great fortune, not too long ago, and we had a conversation that I felt could have gone on for many, many hours about the state and history of data and computation. So I'm really excited to ask you this question, which is: what does open source data mean to you?
Jocelyn:
Yeah, it's interesting, because I'm used to thinking about open source in the context of software, of code. Compiling my own Linux drivers was my first exposure to open source. But these days it's definitely the way for products to get wide distribution and to build community.
And data is interesting because we don't have as many examples of freely and openly available data sets. But I think of Kaggle, which was actually a portfolio company of my firm's first fund. They're probably the leaders there in terms of making data widely available to communities to build around and build on top of. But I'll say one aspect of open source code, which I think is sometimes overlooked - and which I think is going to be true of that kind of open access to data as well - is not just that we can try a product for free or get code contributions from the community. I think it's a way that people can hone their craft as software developers by looking at this corpus of beautifully written code, and get smarter.
When I graduated from college in the nineties, if you studied CS, you were new. You didn't know what beautiful code looked like, and the only way to study the masters, the greats, was to go work for a company.
Now, if you want to study the masters, go on GitHub. It's all there. You can read it, you can follow how it evolved, and you can develop taste and design sense.
And I think that sort of open access to data, it's not just that people can more broadly innovate and build models and new applications on top of freely available data. I think people can get smarter about what good looks like. So much of the future of machine learning and AI and analytics and BI is all about, not just volume of data, but the right data, data that's clean and fit for purpose. So being able to look at really good data sets - I think there's something there.
So anyway, thanks for asking this question, because it made me think around a corner I hadn't thought around before.
Sam:
Yeah, and that's kind of the intent behind it, right? People are used to thinking about open source, and they're used to thinking about open data, but when we combine the three words into one phrase, there's an opportunity to create new meaning.
And I love what you said there; it evoked a sort of mastery, right? As we create open source for all the things and apply it to the domain of data, there's a mastery opportunity for people who just want to come up the curve. There's also generosity. You mentioned Kaggle, and of course their incredibly openhearted and generous leader Anthony Goldbloom, one of the sweetest folks I've had the privilege to meet. I was at Google when we acquired Kaggle, and the whole vision of hundreds of thousands of open data sets for millions of budding data scientists is incredibly inspiring.
So you have a really super neat career, and I would love to have you tell a little bit of the story, because I want to use that as context to ask a few more questions. Can you talk a little bit about your journey from engineering to venture capital, and a little bit of how you came to focus on these seed-stage AI and B2B companies when you did? Because I think you were ahead of the curve, right? In 2021 it's become obvious, but you were doing this before it was obvious, and I think that's really interesting.
Jocelyn:
I graduated from college in '97, so just a little bit ahead of the dot-com boom years, and it was not yet the sexy thing to do to go work for startups. I mean, the sexy employer in 1997 at Stanford University was Microsoft. I decided to go to a startup nonetheless and fell in love with entrepreneurship, and worked at a couple of different early-stage companies, including one that IPOed during the boom.
And then I had the really awful timing to start a company myself. We co-founded a company in the year 2000; we raised money 30 seconds before the funding window slammed shut. And then the next startup I went to work for was VMware, when it was a couple hundred people. That was such an amazing experience. It was career-making. You know that advice about, don't ask what seat, just get on the rocket ship? That is such good career advice. VMware kept doubling, and my career kept doubling with it. I stayed seven years, I grew up with it, and I rose to the exec level in engineering. And the entirety of my career, all those startups and VMware, was always B2B. We were always solving the problems of the enterprise.
I don't know why, but I love solving the problems associated with work. It feels fundamental. Changing the economics for people and for businesses is how you change lives. So that always appealed to me more than consumer. But I did eventually decide I had to give consumer a shot, and there were also infrastructure and scale challenges that really appealed to me.
So in 2010 I decided to leave VMware, and the next company I joined was Facebook. Facebook was an education. In those four years, it's where I cut my teeth on machine learning, it's where I cut my teeth on product-led growth. It taught me so much; I'm eternally grateful. I think of it as a superb engineering culture and a super well-run engineering function. They invented a lot of their own practices, but also had the humility to learn from the best.
Those years went by in a heartbeat. And one of the many things I'm grateful to Facebook for is that it was so prominent in the zeitgeist, I guess, of Silicon Valley in those years - I was there from 2010 to 2014 - that it really opened the door for me to become an angel investor. And I started angel investing.
And when I left Facebook, I didn't quite know what I wanted to do. I knew that I had been in engineering leadership roles for a long time, and I felt that my learning curve was flattening. I really thought, okay, I've got to do something completely new and different, but what is that? So: maybe I'm just going to hang out for a year or two, investing in and advising startups, and maybe what I should do will come to me. And of course, with a plan like that, you're obviously hoping the anvil falls on your head and you figure out what kind of company you'd like to start, or you meet your co-founders.
But instead it was more like it dawned on me, very, very slowly: hey, I'm actually really enjoying helping founders and seeing the deal flow and the flow of ideas. And it became more and more rewarding, I realized, when I wrote bigger checks and spent more time. In one case, a startup invited me to be the independent director on the board, and then I felt like, actually, I'm really material to this company. I'm making a real difference, as opposed to just writing a little check and giving an opinion now and then.
And so I realized it was institutional investing, more so than angel investing, that I was gravitating towards and finding fulfillment in. How I found a job in venture is a different and longer story, but that's why I pursued it, and very happily it worked out for me.
And I've landed at a firm that is really focused on technology. Zetta stands for zettabyte; we are focused exclusively on AI and big data startups. And B2B, like always, made sense to me. I think Facebook only convinced me even more, because the great thing about being at Facebook is you can have any wild and crazy idea, and you can test it and find out if you're right or wrong. And so I quickly gained the humility of knowing that you are not the consumer. You are not the user. Your ideas are not necessarily the ideas that get uptake, even if you put them in front of a billion people.
And so I have the humility to know that you can't figure out which consumer ideas are good without traction, pre-trial. I don't think anybody knows; maybe somebody does, I don't know. So it was much more appealing to me to think about B2B, where even pre-traction, if you have the technical background, if you can understand what they're building, you can have a fair idea of whether it's going to work. And then if you have the experience of being a customer, or you can talk to people in that line of business, you can figure out if it has value.
And if the technology is going to work, and it's going to have value to businesses, then I think you're in a pretty good position to write a pre-traction check, and to have confidence.
Sam:
What led you to focus so specifically on data and AI in, I think it was, 2015, when you made your decision to join Zetta? That's fascinating to me. I suspect there was some influence from the kinds of problems you were solving at Facebook - and of course, I'm now working for a company that builds commercial infrastructure on top of the Facebook project, Cassandra, so I suspect there's some level of affinity there. But that was an early time to make a bet that was so focused.
Jocelyn:
Yeah. First of all, I think the real credit goes to my partner, Mark Gorenberg, who founded Zetta in 2013 with that focus. And I think he'd seen these cycles before. There was a time in venture, in the eighties when he got going, when everybody was investing in systems companies. And he worked at the first firm to focus exclusively on software, because they called it early. They said, "You know what? Hardware is becoming a commodity. The value is coming from the applications running on the hardware." They were right. And they saw the same thing happen with the rise of cloud.
And so even very early on, he was able to see the rise of data and analytics and the importance of that - that this was another generational shift in computing, where the value was going to come from combining the whole stack, not just from the piece underneath. When there's a new wave like that, it opens up completely new categories that simply didn't exist before. It also gives newcomers a chance to disrupt incumbents and create new category kings.
So I think Mark truly deserves the credit for seeing it in 2013, but when he told that story to me in 2015, it immediately resonated, because in 2010, when I joined Facebook, it was one of the best places on the planet to do machine learning. And they were far along.
So much of it was DIY and homegrown. You know that old Carl Sagan quote: if you wish to make an apple pie from scratch, you must first invent the universe. Well, if you wanted to do a machine learning project in production in 2010, first you had to grow an apple tree. It was a lot like that with data infrastructure. That's why Cassandra exists, and why all these projects came from that time.
But that was already in place; Cassandra existed by 2010 when I joined Facebook. And so we were starting to be able to use ML for real production use cases. I think the very first were search and ad targeting, which are both really good candidates for machine learning, because even if accuracy is not perfect, if it's better than your non-machine-learning alternative, it's just pure upside. If it's right more often, that's great. And if it's wrong sometimes, oh well, it can still beat the alternative.
And that's why I think you first saw production use cases of machine learning in those kinds of upside-only payoff situations. Netflix movie recommendations, I think, were really the first popular use case of ML, with Amazon shopping recommendations soon thereafter. And then I think Facebook was right in there.
Google was actually very late to bring machine learning to search, because they had built search in the pre-machine-learning era. So Bing had it before Google did on the search side; Google used machine learning in plenty of places besides search.
So in 2010, my first big project at Facebook was bringing machine learning to the third anchor tenant of Facebook after ads and search, which was the newsfeed. We brought ranking to the newsfeed.
If you were using Facebook in 2011, you may remember this was a tremendously unpopular move, almost as unpopular as launching the newsfeed in the first place, because people wanted it to be chronological. But this goes back to one of those things I said: you are not the consumer.
We ran test after test after test. We may believe and feel that we want our feeds to be chronological, but we engage with them far more when they're ranked, because we give a disproportionate amount of our attention to the first 10 items in the feed, and an even more disproportionate amount of attention to the number one item. Chronological basically puts stories of random quality in those slots, right? Ranking gives you a fighting chance that they're worth engaging with. So that's why we did it, as unpopular as it was. And everybody hears this and says, "Oh, yes, I'm sure that's true of everybody else. But me, I engage more when my feed is chronological." And I'm here to tell you you're wrong - we ran the tests.
Sam:
One of the interesting things about those two categories you pointed out, ads and search, is that they are also embarrassingly full of data. The volumes are staggering. So the ability to create fairly good inference models off that quantity is pretty good, even for fairly dumb algorithms. So your opportunities were really, really good, right?
Jocelyn:
Right. If you put up an ad, you get an immediate feedback loop: you get a click or you don't. You put up a search result, you get a long click or you don't. You might get a short click, which tells you it looks like you might be right, but a long click tells you you were right, because someone got their question answered and didn't come back.
So those are both applications of machine learning where you find out instantaneously if you were right or wrong, and that's the condition in which a model can accelerate its improvement really fast. You can compound the accuracy very, very quickly.
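To make that compounding concrete, here is a minimal sketch in Python - toy model, made-up names, not how Facebook's or anyone's real ad or search system worked - of a click-rate estimator that updates the instant feedback arrives, so every impression sharpens the next decision:

```python
import random

# Toy sketch of an upside-only feedback loop: estimate each item's
# click-through rate and update the moment feedback arrives.
class ClickRateEstimator:
    def __init__(self):
        self.clicks = {}
        self.impressions = {}

    def score(self, item):
        # Posterior mean of a Beta(1, 1)-Bernoulli model:
        # every item starts at an assumed 50% CTR.
        c = self.clicks.get(item, 0)
        n = self.impressions.get(item, 0)
        return (c + 1) / (n + 2)

    def record(self, item, clicked):
        # Instantaneous feedback: each impression updates the estimate.
        self.impressions[item] = self.impressions.get(item, 0) + 1
        if clicked:
            self.clicks[item] = self.clicks.get(item, 0) + 1

# Greedily show the best current guess; even a crude model pulls ahead
# of random ordering as soon as a little feedback accumulates.
est = ClickRateEstimator()
true_ctr = {"ad_a": 0.30, "ad_b": 0.05}  # made-up ground truth
for _ in range(1000):
    item = max(true_ctr, key=est.score)
    est.record(item, random.random() < true_ctr[item])
print({k: round(est.score(k), 3) for k in true_ctr})
```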
And the newsfeed ended up being a product like that as well, where we got this immediate gratification: if we put something good in the top slot, we knew, because people clicked and liked.
So that was just incredibly rewarding. And even then, as you say, there weren't that many data sets in existence of that volume and quality. Facebook was sitting on a gold mine.
It's never easy to hire data scientists and ML engineers, but it was so much easier for Facebook to do that hiring, because you would talk to candidates about the petabytes of data we were sitting on, and their jaws would drop and they'd beg to come, because they couldn't do their work without data like that - or they could do so much more. They can fly so high with those kinds of data sets. So it was easy in 2015 to believe that ML would overtake every industry, the way it had become fundamental to Facebook in the four years I was there. That was just a no-brainer to me at the time.
Sam:
The interesting reverse of the technology side is the people. You grew through many, many jumps in leadership and organizational models, both in the orgs you had to lead, refactor, and rearchitect, and in the orgs you were in, at VMware and then Facebook - almost foreshadowing many of the issues that big enterprises have now, which is: how do I organize around data? How do I hire data scientists?
I'd love for you to share a few of the things that stand out for you, that you found to be true about organizational design, and how you apply those insights today.
Jocelyn:
I think about organizations as a graph structure: people are nodes, and the organizational lines are the edges. I have a strong edge with my teammates, a very strong edge with my boss, a very strong edge with someone who works for me. Maybe it's two hops to someone on the next team over, maybe it's five hops to someone in a department far away.
So it's a graph. And the interesting thing about graphs is that as they become larger - as the number of nodes grows linearly - the number of edges grows quadratically, and that has really interesting implications for communication and for scale. I always took a bit of a systems view of people.
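As a toy illustration of that scaling (just a sketch of the arithmetic): in a fully connected group, n people have n * (n - 1) / 2 potential pairwise channels.

```python
# Nodes grow linearly; potential communication edges grow quadratically.
for n in [2, 10, 50, 250, 1000]:
    edges = n * (n - 1) // 2
    print(f"{n:>5} people -> {edges:>7,} potential pairwise channels")
# 2 people share 1 channel; 1,000 people share 499,500. No one can
# maintain them all, which is why structure - and reorgs - exist.
```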
And I think, just like distributed systems, when it's small you don't have to think too hard about it; it just works. But as it gets bigger and bigger, the hard architectural problems all end up being: how do we re-architect for scale? And you don't want to be too precious about this when you're little and don't have that problem yet.
Something that happened at both VMware and Facebook, as they were on these high-growth curves, is that there were relatively frequent reorgs, maybe even annual. And the thing is, when reorgs happen too frequently, it's easy for employees to become very cynical. It's like, "oh, we're just shuffling desk chairs." And every time, you think: if the new organizational structure is correct, then the old one must have been wrong, right? "So we've been making the wrong decision every time. Why should I think this is the right one?"
But I actually think that, first of all, as you grow in scale, you're always going to have to re-architect for scale. And second of all, a lot of the time, org structures optimize for different things. The classic example, which anybody who works in a big company is familiar with, at least in the product part of the organization, is the matrix. Are we organized by function, where I'm in the part of the org with all the other engineers, and the PMs are many hops away from me - my third cousins instead of my siblings? Or am I in an org that's organized by product, so the PM and designer and engineer and QA for a product are all siblings, and it's other engineers who are my third cousins?
These optimize for different things. And I think what happens is that over time, the org chart gives you one set of edges, the organizational ties. Other edges come from sitting near each other, from friendly working relationships, from cross-functional projects you did together. And as long as the graph is strong, and there are lots of strong edges between all the nodes, you're going to be able to collaborate and get stuff done, whatever the bureaucracy or red tape may say.
But when those ties get weak, then we have trouble marshaling collaboration - and we need functional collaboration, and we need product collaboration. So I sometimes think the only happy medium is to oscillate between the functional matrix and the product organization. As the memory of those old functional ties becomes weak and the product ties are very strong, you oscillate back and reorg the other way. So sometimes it seems like you're just undoing the last reorg, and maybe that's okay.
Sam:
That's really, really deep wisdom there: looking at the organization as a graph or a network rather than a strict hierarchy, and then looking at how curves grow and change. The need to re-architect and refactor, the same way we do with code, doesn't mean your last version of the code was wrong. It's just that you're getting 10x more requests per second now, and obviously that's different and you need a new algorithm, or a new architecture.
You bring a lot of that to seed-stage startups, where you've got a two-person co-founder team or a solo founder bringing people in and starting to scale up. How do you marry that to the kind of infrastructure you're looking for? Because as you're investing, you're looking at: is this the right market space? Is it too crowded? Is there something really insightful in the technology? And then, can it really be a company? So I'd like to get some of your insight on org as it meets technology at that seed stage. As startups are growing up, what do you see there? What do you advise?
Jocelyn:
Well, I think kind of the same thing about the people and organizational infrastructure as I do about the backend distributed-systems architecture and infrastructure, which is: at the two-guys-and-a-dog stage, don't sweat it. I think it's essential at the scale and growth stage, but it doesn't really matter in getting from two people to 10 people. That's not the thing that's going to make it, and that's not the thing that's going to break it.
So what do I look for when it's that early? I kind of hinted about this earlier, but it's true for every venture that if you're making wild proclamations that don't sound achievable, or possible, or feasible, it's hard to attract capital, right? I have to believe it's possible.
When we're working in deep tech, when we're working in something like AI, that's not a given the way it might be in SaaS at this point. If someone comes along with a new SaaS idea, it's a given that it's technically feasible; people may or may not want it, so it ends up being a question of whether you have a good market insight, not whether it's possible to build it. In AI, it's not always possible to build it, and figuring out whether it is involves really figuring out what problem you're trying to solve.
There were a whole bunch of AI meeting-assistant companies about five years ago, because finding matching holes in a schedule looked like a good problem for machine learning - and by the way, it is. But somehow none of those ever took off, and I think everybody who had a human admin five years ago still does. That's because what human admins are really doing is not finding the matching hole in the schedule. They're actually navigating a pretty complex and mostly subtextual dynamic around whose convenience matters more, who's got priority over whom, and ultimately whether there's symmetry or asymmetry in the power relationship.
And that's not a good problem for machine learning at all. So: figure out whether you're dealing with a problem that needs ML, because if you can solve it with a line of code, then my God, people with Perl scripts are going to run rings around you while you're sitting there trying to build a model. And it also has to be solvable by ML. So that's really step one.
I'm happy to invest pre-traction. I'm happy to invest pre-revenue and pre-customer. I can even sometimes invest pre-product, because I'm technical, if I understand the problem well enough to know it can be built.
But what I like to see, usually, is post-data: that you have enough data, and enough of a model, that you can run some experiments. You can show this is a tractable problem for ML, one that ML can solve better than rule-based methods. If you've got that, then you've got enough, and we can worry about getting from lab-experiment scale to Facebook-newsfeed scale. That's an execution problem; I can help you with that.
Sam:
Yeah. And tractability is an interesting and tough thing to measure in ML, because some things are possible but not productizable, right? Because it's so bound to the particular context. I often think that software development is like writing a recipe, whereas AI development is more like training a puppy. It's easy for the puppy to go wrong - and can my puppy really live at your house? How do you start productizing these things?
Personally, I've seen failures. We made an acquisition at Apigee in 2014 of a Hadoop-based AI, and we were not able to productize it.
It had some really nice early signs in being able to improve customer retention for some of our healthcare customers, but ultimately the models just tended to skew too much, and you needed too much expertise from the customer relative to the value they thought they were getting. So those are tough problems.
Have those gotten much easier in the last five or six years? Is it more obvious which ML things are not only possible but can be productized, or is that still a bit of an art?
Jocelyn:
I think it's still a bit of an art. But the one thing we try to deeply understand, that we ask a lot of questions around - and I won't say we're infallible at predicting the result, but we really try to test founders on it - is the degree to which data from one customer is going to be relevant to another customer.
Because it comes back to this: if customers are similar enough to one another that I can train one model on the pooled data, and it gives a better result for both of them than if I just made a model for each of them, then this is something that can scale. But if pooling reduces my accuracy, or drops me below a threshold of accuracy when I pool the data,
then, first of all, I'm never going to have an out-of-the-box experience for customers; I'm basically always going to have to build a custom model for every customer. P.S., this looks a lot more like a services business than a product business, which VCs hate. And it implies exactly what you're saying: the model doesn't generalize, and you don't have something that can work at scale.
Whereas if having two customers' data sets gives you higher accuracy than one data set, or than two separate models, that's a pretty big advantage in an early startup.
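A toy way to run the test Jocelyn describes - with synthetic data and hypothetical customer names, not any portfolio company's setup - is to train one pooled model and a per-customer model on small samples from the same underlying relationship, then compare accuracy on a hold-out set:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # shared "world" both customers live in

def make_customer(n, noise=2.0):
    # Synthetic customer data drawn from the shared relationship.
    X = rng.normal(size=(n, 5))
    return X, X @ true_w + rng.normal(0, noise, n)

Xa, ya = make_customer(40)   # customer A: small data set
Xb, yb = make_customer(40)   # customer B: small data set

solo_a = Ridge().fit(Xa, ya)                              # per-customer model
pooled = Ridge().fit(np.vstack([Xa, Xb]), np.r_[ya, yb])  # one model, pooled data

Xt, yt = make_customer(500)  # fresh hold-out from the same world
print("solo   R^2:", round(r2_score(yt, solo_a.predict(Xt)), 3))
print("pooled R^2:", round(r2_score(yt, pooled.predict(Xt)), 3))
```

When customers really do share structure, the pooled score comes out higher; if pooling hurt instead, that would be the warning sign she mentions.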
Sam:
You've been mapping a lot of the space of ML and AI just through your day-to-day work. You've seen a bunch of trends, and there's the B2B focus that you have, so you tend to look at the patterns and then at what's missing. But there's also a lot of excitement. In terms of these missing pieces, or these transformative areas of tech that you focus on right now, what are you most excited about? What makes you jump out of bed in the morning?
Jocelyn:
Two things. The first is the tools and platforms that machine learning engineers and data scientists are going to use. I think we are now well into the period whose dawn, a decade ago, was the commercialization, the productization, of AI. Enterprise customers were tinkering with it; maybe only a few things had been in production.
Now we're five or ten years further along. We have an order of magnitude more talent, because ML has been hot this whole time, and something like 80% of new college grads studying computer science for the last 10 years have been studying ML. So all of a sudden we have a step-function improvement, and people with five years or two years of experience doing it. We have exponentially more people.
I think now we have a lot more people who've been through a couple of different products, and they're starting to be opinionated about what they want in their tooling and their platform. We've got a lot of case-law precedent about how to deploy AI in production, whereas before it was mostly tinkering around in the lab, and chewing gum and duct tape to get it out.
I think we're at a moment where, instead of "I'll take what my cloud provider gives me and like it; I'll push a model into production the way Amazon tells me to," we're getting to a world where there's true appetite - a large market - for best-of-breed tools. So I think that's a place where innovation can flourish, because we love to build tools for ourselves. It's so much easier to build a tool where you are the customer.
So I think we're in a real flowering moment, with tons of entrants - if anything, an overabundance - and I think we've got consolidation in our future, with new tools tackling different parts of the data science and machine learning workflow, and then deployment to production. And what is the equivalent of DevOps for models rather than code? I think it's not as direct a metaphor as everybody wants it to be, because models aren't code, and they have different needs. We ship software and we talk about bit rot, but software doesn't really stop working the day you ship it; it's just that over time usage changes, or maybe an integration breaks because an API changed, and that takes months and years. Models literally start decaying the day you put them live, because the data you trained them on is becoming less relevant.
So I think that's an incredibly fruitful area for innovation. And I think lots of people are innovating and that makes it exciting.
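One cheap, common way to watch for that decay - a sketch of the general idea, not any particular vendor's product - is to compare the distribution of a feature at training time against what production is seeing now:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Stand-in for a feature's values captured at training time.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)

def check_drift(live_feature, alpha=0.01):
    # Two-sample Kolmogorov-Smirnov test: could these plausibly come
    # from the same distribution?
    stat, p_value = ks_2samp(train_feature, live_feature)
    return {"ks_stat": round(stat, 3), "p_value": p_value, "drifted": p_value < alpha}

print(check_drift(rng.normal(0.0, 1.0, size=2000)))  # same world: no alarm
print(check_drift(rng.normal(0.6, 1.3, size=2000)))  # usage shifted: alarm
```

A drift alarm doesn't fix anything by itself, but it's a trigger for retraining, which is exactly the models-decay problem she's describing.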
The other thing I'm excited about is closely related, almost coincidentally so. That first space is frequently now referred to as MLOps. The other place I love is AIOps, which is not a synonym for MLOps: it means the practice of using AI to automate DevOps.
So going back to my VMware roots and looking at cloud infrastructure: things like virtualization and cloud have let us be an order of magnitude more ambitious in the way we architect our backends. They can be architected for scale, for microservices, you name it. We're in the middle of all these incredible secular shifts, each an order-of-magnitude increase in the complexity and surface area of what we're trying to manage, and we need better management tools.
And I think the jury's out on what those are going to look like. For the most part, there's so much complexity that we need the help of machines to manage the machines, so I think this is a really great problem for AI, and I'm super psyched about it. Systems management was a category 20 years ago, and I think it's due for a comeback, but we're going to call it AIOps this time around, and it'll be smarter.
So I'm super excited for that. And closely related is AI for cybersecurity, another big asymmetrical problem, where the attack surface is immense because we have all this scale from new endpoints, from IoT, from the cloud, you name it. Attackers only need to find one entry point, but defenders have to defend it all. So it's again an asymmetric-warfare problem, where I think automation and AI are our big hope. All of these are areas where we get so much more leverage on the human if we can bring technology to bear to help us out.
Sam:
Yeah, I love that focus. The more we do artificial intelligence - and that's what it was called when I got my degree in it, AI and neuroscience, which we called cognitive science back in 1994 - I think what's magical is that we're starting to piece apart all the things we thought made us special as humans, and realizing that those aren't the special things. Oh, you've got a pattern recognizer? Okay, fine. As we decompose those elements, we will slowly reveal what actually makes it magical to be human: that we feel, that we relate, that we can innovate, that we can create, that we can share love.
So this focus on augmenting the intelligence of the person, as opposed to creating some new generalized artificial intelligence to replace us, I think is very heartening. It's a more realistic view, I think, and also much more optimistic.
Jocelyn:
Yeah, I actually love that. You know, I call myself a techno-optimist; I suppose it goes with the territory. But think about the kinds of work that people do. Fine motor control is one of those things we've put in the bucket of "only humans can do it." But what if you have a job that rests on nothing but your fine motor control - not your creativity, not your intellect, not your empathy - like being a cashier in a toll booth?
Right. Pottery takes creativity, not just fine motor control. But being a cashier in a toll booth is actually a terrible job. People with that job are miserable and have high rates of mental illness and other problems.
So it turns out these jobs that only leverage the parts of us that are easily replicated, say by a machine, are the worst jobs - they're bad for us humans. And the work that relies on our creativity, our intellect, our empathy may be the most fulfilling work for people.
Sam:
Yeah, that's a beautiful view. For the hints that something should be robotically automated, I always like the three Ds: is it dull, is it dirty, is it dangerous - or a combination? None of those are good uses of our precious human life. So let's give people the tools they can use to elevate.
So on that note, we're short on time, and I'm going to ask you our favorite closing question. There's a whole field of folks in our community, in open source data, who are just coming into the job market. They're graduating out of the jaws of COVID, or they're early in their careers - they've had their first couple of forays, a couple of two-year tours of duty on different teams - and they're trying to make sense of the next decade. What does their career look like? If they've been inspired by some of the things you've talked about, what's a resource, or a link, or a piece of advice that you would leave them with?
Jocelyn:
When you frame it like this, I do actually have a very strong opinion. I'll go with the piece of advice, because I teach a class at Stanford for CS majors who are about to graduate and start their careers. So I actually am super opinionated.
I mean, first of all, you should work at the intersection of what you're passionate about and what you're good at. This is generically true, good advice. I didn't invent it; I got it at Facebook.
But for lots of people, it's unclear what their passion is or what they're good at, and when you're a new grad, those two things are hazy. So what I tell people is: if you have a dream, pursue it, by all means. But if you're trying to figure out what that is, if you don't know, the "follow your passion" stuff is pretty unhelpful. Optimize for growth instead - both when you choose the first job, and when you decide what should make you leave the first job and move to a second job, which is when you feel your growth plateauing or slowing. Think about your personal growth in your career. And I don't mean title or comp; I mean building your skill set and your knowledge and your wisdom about how to solve problems and ship products.
If you are growing in those things, you're going to feel great and be fulfilled. But even more than that: even if you're working on work that is not your passion, first of all, you're going to learn something about what is and is not your passion from doing it. And second of all, when you meet the work that is your passion, you'll have all this great capacity to throw at it.
So when you don't know what else to optimize for, optimize for growth, and the time to leave a job is when your growth has stopped.
Sam:
That is awesome advice, Jocelyn. Thank you for your generosity of time. This has been a great conversation, and thanks for your openness of mind and heart.
Jocelyn:
Oh, Sam, thank you for inviting me. These are all topics I could chat about all day.