59 minutes 54 seconds
Speaker 1
00:00
I would like to introduce Jared Friedman, my partner, and his esteemed panel, who he will introduce to talk about technology. Thank you.
Speaker 2
00:10
Thank you, Jeff. Okay, well, I am super lucky to have a very esteemed group of guests with me here today. Everyone on this panel is a technical founder of a really successful company.
Speaker 2
00:25
Everyone here has built a really cool company and a really amazing technology organization. We changed the name of the event for today. It was originally called CTO Advice, and on the advice of Lillian, thank you so much, we have changed it to Technical Founder Advice, which I think is a better description. And I'd also like to thank everyone on the Startup School forum who wrote in with questions for the panel.
Speaker 2
00:50
We posted a few weeks ago and solicited questions. We got over 150 responses. So thank you, everyone, who wrote in with those questions. We're going to do our best to cover as many of them as we can.
Speaker 2
01:00
And then at the end, we'll open it up to the in-person audience for some questions as well. Okay, so let's get started. Could we start by having everyone introduce themselves and tell us about your company and about your technology? What's your tech stack, what's some of the interesting technology that you've built, what does your technology organization look like today? Ralph, you wanna start?
Speaker 3
01:30
Hello everyone, my name is Ralph Gootee. I'm CTO and co-founder of PlanGrid. We're 350 people based out of the Mission in San Francisco.
Speaker 3
01:39
We write beautiful, easy-to-use software for the 17 trillion dollar construction industry. The analogy I often use for what that looks like is GitHub for construction. Construction has blueprints. Blueprints change rapidly.
Speaker 3
01:52
Version control is extremely important. If you have changes, that means there's issues that are happening. Issues need to be tracked. And then we build collaboration tools on top of that, too, as well as a lot of other tools for the construction industry that go into some deep jargon.
Speaker 3
02:06
Our stack, we're based on AWS. We used to be based on a variety of other things; we had to move everything into AWS over time. On the back end we're mostly Python, though we've got some Go and other things. And then one of our challenges is we actually write native for every platform.
Speaker 3
02:20
So we've got iOS in Swift and Objective-C, Android all in Java and Kotlin, web in React, and then Windows. We have a full Windows app as well, which is .NET.
Speaker 4
02:31
Cool. Oh. Hi, everyone. I'm Calvin French-Owen, CTO and co-founder of Segment.
Speaker 4
02:39
Segment is a single API to collect, organize, and route all of your customer data into hundreds of downstream tools you might be using. Whether that's an analytics tool like Google Analytics or Mixpanel, maybe a customer success tool like Gainsight, maybe an email tool like Customer.io, Segment helps you collect data from where it lives and get it where it needs to be. In terms of the company overall, we're a little over 300 people right now. We have our headquarters in San Francisco, but we also have offices in Vancouver, New York, and Dublin.
Speaker 4
03:14
And the engineering, product, and design team, which is kind of how we build product here at Segment, is a little over 80 or 90 right now. In terms of our tech stack, we're also built entirely atop AWS. We run on top of ECS, Amazon's container service, and pretty much all of our different services are containerized. Today we're running about 300 different microservices, which are piping together various Kafka topics: reading, writing, and transforming this data, and getting it where it needs to go.
Speaker 4
03:48
And our back end is primarily written in Go.
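The pipeline Calvin describes, services consuming one Kafka topic, transforming events, and producing to another, can be sketched with in-memory queues standing in for Kafka topics. All names and fields here are illustrative, not Segment's actual code:

```python
from queue import Queue

def transform(event: dict) -> dict:
    """One pipeline stage: normalize a raw analytics event.

    Stands in for the kind of per-service transformation applied
    between Kafka topics; the field names are made up for this sketch.
    """
    return {
        "user_id": str(event["userId"]),
        "event": event["event"].strip().lower(),
        "properties": event.get("properties", {}),
    }

def run_stage(inbound: Queue, outbound: Queue) -> None:
    """Drain the inbound 'topic', transform each event, publish outbound."""
    while not inbound.empty():
        outbound.put(transform(inbound.get()))

# Wire two "topics" together and push one event through the stage.
raw, normalized = Queue(), Queue()
raw.put({"userId": 42, "event": "  Signed Up "})
run_stage(raw, normalized)
print(normalized.get())  # {'user_id': '42', 'event': 'signed up', 'properties': {}}
```

In a real deployment each stage would be a separate containerized service with a Kafka consumer and producer, but the shape of the work, read, transform, write, is the same.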
Speaker 5
03:52
Good morning everyone. So my name is Diana Hu. I was the founder and CTO of Escher Reality.
Speaker 5
03:59
And we're building the back end for augmented reality. And I say I was because my company just got acquired by Niantic, the makers of Pokemon Go. So what we were building at Escher, and now actually continue doing at Niantic, is backend technology for augmented reality that enables developers to build AR experiences as easily as possible. We handle all the complexity of the computer vision algorithms, all the interaction with the back end, all the hard parts of getting things to render, so that developers can build AR apps in minutes.
Speaker 5
04:36
So that's what we do. And we provide a lot of advanced AR features, like multiplayer, persistence, and cross-platform support, so that it's easy for developers to just develop in, let's say, Unity, and it just works. In terms of tech stack, it has been a bit of an adventure. Back at Escher, we had a service that was hosted on AWS, but that didn't matter too much because we had everything in Docker containers.
Speaker 5
05:03
Now, Niantic is a heavy user of Google Cloud, so we moved everything to Google Cloud. But the bulk of the code is actually C++; for us, native literally means C++, because it's a lot more efficient for us to write the code and the algorithms once and then cross-compile across all the architectures for Android, iOS, and even the backend. Since our server is also written in C++, we can prototype the CV algorithms on the phone and then easily move them onto the server.
Speaker 6
05:39
Hi everyone, I'm Lillian. I am COO, not CTO, of Second Measure. We're about 50 people.
Speaker 6
05:50
We got started back in 2015 and we're based in San Mateo. What we do is we analyze credit card data. So basically we take billions of credit card transactions and build a daily view into public and private company performance. We analyze these billions of transactions automatically every day; adjust, clean, enrich, and normalize them; and then serve that in a front-end application for our clients, which include VC firms, hedge funds, and big brands like Blue Apron or Spotify, to check trends and do any sort of competitive intelligence or customer intelligence.
Speaker 6
06:31
So we actually don't have a CTO, so that's a fun fact about us, and that's why I am here on the panel today. My co-founder and I are actually both technical, so that's been pretty fortunate: we don't have to deal with as many of the challenges that come with starting as non-technical founders.
Speaker 6
06:54
So I mentioned that our team is about 50 people, primarily technical. Our technical organization is about 30 people, split evenly between data scientists and engineers. And that's probably something that makes us unique: our core product is actually the data itself, so our data scientists and engineers have to collaborate super closely on a day-to-day basis. One of the things that has been technically interesting is that a lot of our users want to explore and dive deep into our data.
Speaker 6
07:29
Building a front end that allows that flexibility of exploration while still creating a good user experience basically requires us to figure out how to rapidly run really complex queries on our back end. You asked about tech stack: we're also primarily on AWS. For our pipeline, we leverage services like Lambda and Spark. And for our front end, we're React-based, and we leverage columnar data stores on the backend to serve queries, so think Redshift.
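The kind of backend rollup Lillian describes, aggregating raw transactions into a daily view of company performance, can be illustrated at toy scale with sqlite3. Redshift is a columnar warehouse and operates at a vastly larger scale, but the query shape is similar; the schema and figures here are made up:

```python
import sqlite3

# A toy transactions table; in production this would be billions of rows
# in a columnar store, partitioned and distributed across nodes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (merchant TEXT, day TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?, ?)",
    [
        ("Blue Apron", "2019-08-01", 59.94),
        ("Blue Apron", "2019-08-01", 47.95),
        ("Spotify",    "2019-08-01", 9.99),
        ("Blue Apron", "2019-08-02", 59.94),
    ],
)

# Daily per-merchant rollup: total sales and transaction count.
rows = conn.execute(
    """
    SELECT merchant, day, ROUND(SUM(amount), 2) AS sales, COUNT(*) AS n
    FROM txns
    GROUP BY merchant, day
    ORDER BY merchant, day
    """
).fetchall()
for row in rows:
    print(row)
```

Columnar stores make exactly this pattern fast, since an aggregate like `SUM(amount)` only has to scan the one column it touches.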
Speaker 2
08:05
Awesome, that was super cool. It's so cool to see the diversity of different kinds of technology organizations and products that are here today. Okay, so most of the companies in Startup School are really early.
Speaker 2
08:20
So I want to take everyone here back to their V1, the very first version, before you'd actually shipped anything. Tell us the story of how you built your V1. Who built it, how did you build it, how long did it take, and what went wrong in the process?
Speaker 3
08:41
I guess I'll start. This is a great story. I've actually got my CEO and co-founder over there watching me right now.
Speaker 3
08:47
And so that's kind of where-
Speaker 2
08:48
Say hi, Tracy.
Speaker 3
08:49
Yeah, hey Tracy, how's it going? So that's where we started. Tracy told me about her idea of putting blueprints on an iPad.
Speaker 3
08:55
This was in 2011, right at the launch of the iPad. So it was very early for the technology. And I said, yeah, no problem. I can put blueprints on iPads like a PDF.
Speaker 3
09:04
That's no problem at all. So I was attempting to impress her really quickly by putting blueprints on an iPad. And it turns out there were actually some hardcore challenges with putting these giant images on the really limited compute available on the iPad. There was no graphics chip at the time.
Speaker 3
09:20
So basically these are 20,000 pixel images, and they would just crash or overrun the video memory, and they're in PDF, which is a kind of painful format to deal with. So the first prototype sucked, it was really slow, and that actually told me that there's something real meaty here, something that's actually a challenge. It took me about a month to write the second prototype based off some of my background. I actually used to work at Pixar, so I had some graphics experience there; I didn't use any proprietary stuff, but used some off-the-shelf computer graphics knowledge to write the first blueprint viewer that ever ran on mobile. So that was our first prototype back then. And I think the best thing I remember about it is that the only thing we really focused on in the first prototype was what was technically impossible. There was no way you could load blueprints onto an iPad at the time, and that was our prototype, which meant that all the data got there by side loading, which meant there was no web interface.
Speaker 3
10:12
And the little crappy web interface we had had a delete button right next to a publish button, not a pixel apart, and there was no confirmation on the delete button. So as a team we managed everything on the back end, and behind the curtains we loaded all the documents for our first 30 or 40 customers. But all they saw was the prototype, which was fast and did what they thought it would. They never saw the man behind the curtain, so to speak. So that's a little story of our first prototype.
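The standard fix for the problem Ralph hit, an image far too large for video memory, is a tile pyramid: repeatedly halve the image and cut each level into fixed-size tiles, then draw only the visible tiles at the right zoom level. A sketch of the arithmetic, with an illustrative 256-pixel tile size:

```python
import math

def pyramid_levels(width: int, height: int, tile: int = 256):
    """Yield (level, w, h, tiles_x, tiles_y) from full resolution down to one tile."""
    level = 0
    w, h = width, height
    while True:
        tx = math.ceil(w / tile)
        ty = math.ceil(h / tile)
        yield level, w, h, tx, ty
        if tx == 1 and ty == 1:
            break
        level += 1
        w, h = max(1, w // 2), max(1, h // 2)

# A 20,000 x 20,000 pixel blueprint with 256-px tiles:
for lvl in pyramid_levels(20000, 20000):
    print(lvl)
```

At any moment the renderer only needs the handful of tiles covering the viewport, so memory stays bounded no matter how large the source blueprint is.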
Speaker 4
10:42
Cool. I'll share a little bit of the story of our V1, and then maybe our real V1 next. For those of you who have seen the Startup School talk where my co-founder Peter talks about finding product market fit, he goes into the story in way more detail than I could today, but I can share a little bit of our journey. So when we first started, we were in the YC 2011 class, which sounds like a dinosaur now in terms of YC years.
Speaker 4
11:10
Yeah. But at the time, we were actually building something completely different from segment today. We were building this classroom lecture tool which would help professors get feedback about what their students were thinking in the middle of class. We were students at the time, we figured, hey, this seems like a great idea.
Speaker 4
11:29
Professors will sometimes go on for five minutes and lose the entire class; the class gets confused and it just wastes everyone's time. Let's solve that problem. We built it out over the course of the summer, we deployed it back in Boston in the fall, and the whole thing just kind of crashed and burned. Students would go to Facebook, YouTube, Wikipedia; in retrospect, all the things you would assume college students would do with a laptop in front of them in a lecture, but we didn't quite see it that way.
Speaker 4
11:57
So we started looking around for a new idea, because we'd just raised a seed round, and we said, okay, what are the problems that we have with this college lecture tool? And one of them was that we couldn't answer questions about how people were using our tool. We couldn't tell you how college students at Harvard were using the tool differently than students at MIT. Or we couldn't tell you how anthropology classes used it differently than biology classes.
Speaker 4
12:24
And so we switched again to something which also we don't do today. But effectively it was a competitor to Mixpanel or Amplitude. It was this tool that would allow you to cluster your users by different rules and break them up into segments, which is where the name came from. And from there, you could then reach out to them or figure out what your most engaged users were doing.
Speaker 4
12:46
And so we built out that system for about 15 months. The whole time we were doing revs on the backend infrastructure, building it out, and trying to get users, and no one wanted our tool. So we arrive 15 months later; we actually move back to the Bay Area after living in Boston for a year, and we go and talk to PG and tell him about our whole saga that's been running for the past year and a half. He listens to our story, he listens to everything that we've done, and when we're finished, he just sort of pauses (I think it was maybe 400 feet out there, walking around the roundabout) and he says, wow, so you just burned half a million dollars, and you're still at square one.
Speaker 4
13:30
We're just like, yes. He said, well, on the plus side, you still have some runway left, so try again. Launch something new. And at the time, we had this internal tool that we had been using as kind of a growth hack to get ourselves more users.
Speaker 4
13:45
And the whole idea was that you would use the same API to send us data as you would send to Mixpanel, KISSmetrics, Google Analytics, all these other tools. And people actually liked that single API and they were contributing to this open source library on GitHub and starring it and using it. And so my co-founder Ian said, hey, why don't we turn this into a product? Why don't we launch this and see what happens?
Speaker 4
14:06
And my co-founder Peter said, oh, that's the worst idea I've ever heard, it will never work. He's CEO now, but he's seen the light. And so he said, okay, I've got to figure out a way that we can test this idea incredibly quickly. So we cleaned up the library and launched it on Hacker News, and to our surprise, it actually got about 1,000 stars that first day on GitHub. And so our real V1 was that first week taking that library: we basically just had a landing page up which said, hey, if you're interested in a hosted version of this, leave your email, and about 500 people left their email. We built out that V1 over about a week, where everyone was in mad hackathon mode building the product, which we launched a week later. And so today I think there are actually no parts of that product which still live on, but it was enough to get us the seed of product market fit and start getting us a user base.
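The core idea of that library, one tracking call fanned out to every analytics destination, can be sketched like this. The destination names are real products, but the class and method names here are illustrative, not Segment's actual API:

```python
class Analytics:
    """One tracking API whose events fan out to many destinations."""

    def __init__(self):
        self._destinations = []  # callables taking (user_id, event, properties)

    def register(self, send):
        self._destinations.append(send)

    def track(self, user_id, event, properties=None):
        # The caller integrates once; the fan-out handles every tool.
        for send in self._destinations:
            send(user_id, event, properties or {})

# Two stand-ins for real destinations such as Mixpanel and Google Analytics.
received = []
analytics = Analytics()
analytics.register(lambda u, e, p: received.append(("mixpanel", u, e)))
analytics.register(lambda u, e, p: received.append(("google-analytics", u, e)))

analytics.track("user-1", "Signed Up")
print(received)  # [('mixpanel', 'user-1', 'Signed Up'), ('google-analytics', 'user-1', 'Signed Up')]
```

Swapping a destination in or out is just a `register` call, which is exactly why a single API was so much more appealing than wiring each tool up by hand.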
Speaker 2
15:03
So one week is the short answer. One week, one week. Everybody hear that?
Speaker 2
15:07
One week. Okay.
Speaker 5
15:10
That's kind of amazing, one week. So our story is definitely more than one week. My co-founder and I had been thinking of building different things, and one of the technology trends that we really saw comes from our robotics background, with a lot of experience with SLAM.
Speaker 5
15:29
SLAM stands for Simultaneous Localization and Mapping, and it's one of the core algorithms that runs in AR today: it tells you where your camera is while you also build a map of the world. So what happened is this category of algorithms had been used mostly for robotics. Now, AR had been tried many times, but never really been picked up. There was AR with markers, and that never really took off.
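As a toy illustration of the "localization" half of SLAM that Diana describes, here is a 2D dead-reckoning pose update. Real SLAM fuses this prediction step with landmark observations to correct drift; this sketch is only the prediction:

```python
import math

def predict(pose, forward, turn):
    """Advance a 2D pose (x, y, heading): move `forward` along the heading,
    then rotate by `turn` radians. This is the motion-model step of SLAM."""
    x, y, theta = pose
    x += forward * math.cos(theta)
    y += forward * math.sin(theta)
    return (x, y, theta + turn)

pose = (0.0, 0.0, 0.0)
# Drive a unit square: forward 1, turn 90 degrees, four times over.
for _ in range(4):
    pose = predict(pose, 1.0, math.pi / 2)
print(pose)  # back near (0, 0), heading 2*pi, up to floating-point error
```

In a real system this prediction accumulates error, which is why the mapping half, recognizing landmarks you've seen before, is what makes SLAM work.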
Speaker 5
15:58
And then we thought, why don't we bring these algorithms and run them on phones? And we got to the point where we thought we could do it, because the computation on phones at that point (the iPhone 7, for example) was about the same as a MacBook Pro from 2013. If you think about that, that's kind of amazing in terms of the trends with Moore's law and with power efficiency and compute. So we decided to go do it.
Speaker 5
16:24
At that time I was actually still working, leading a data science team, and I said, okay, I'll do it, because I was thinking of making a transition. And I was working part-time on this with another engineer, one of the engineers we'd gotten excited to work with us. So we struggled a lot to get a version working. The first one was very duct tape, and it wasn't very encouraging.
Speaker 5
16:48
It was only working at 5 frames per second, which is terrible, because for a good AR experience the whole thing needs to render nicely at a minimum of 30 frames per second. So that was very discouraging. And then, because it was all kind of thrown together, we started really stripping the research code out, and then it started working and showing a lot of promise, running at at least 30 frames per second. Note that when we were working on this, it was about a year and a half before ARKit launched.
Speaker 5
17:23
And then we were excited about this. So we were both more from the technology side, so we didn't know the product market fit, or where to go with this. So we had to find the market. So the exercise we did is actually we went in and interviewed a bunch of people that we thought would use it, and we found a really good fit with gaming.
Speaker 5
17:43
So we decided to focus on that, and that was our YC application. And the thing that really hit us, and this is actually a good story too, is that on the first day of YC, Apple announced ARKit, which is basically what we had built. That was sort of a moment of truth for us, one of those moments where you look at yourself and it's really fight or flight.
Speaker 5
18:10
So we decided, why not, let's just take on Apple. We decided to double down, and we had this idea that the product wasn't just on the device; all our demos and technology were only working on the phone, and we didn't have anything on the back end yet. So during YC we got it working with a back end and also with Android, basically taking the roadmap we thought was about a year of work and shrinking it down to three months.
Speaker 5
18:35
And once we put it out there and let people sign up if they were interested, we got about a thousand developers signed up in about a week. So we figured, okay, I think we found something. And then after that, we got a lot of interest, and Niantic was one of them; the story is we got acquired later.
Speaker 5
18:55
So that's sort of the summary.
Speaker 2
18:57
So how long was it from the time that you started working on the original SLAM algorithms until you had a working product in users' hands?
Speaker 5
19:07
So we were not really working full-time. It was probably about a year of part-time. And even after that year of part-time, it wasn't really working, so we went full-time.
Speaker 5
19:20
It took another six months of just me and another engineer. And then during the summer at YC, we hired two more engineers, and that's when we were able to really accelerate and get it working well.
Speaker 6
19:37
Cool. So for us, our V1 probably took, assuming full time, two or three months to build. And I actually would say that most of that time was just figuring out what our product was gonna be. We knew that the problem that we were working on was gonna be really interesting.
Speaker 6
19:54
Initially, we were focused on investors, basically trying to give them an information edge on any investment decisions they're making, whether that be hedge funds focused on public markets or VCs focused on private markets. We ran through a few ideas. We were like, hey, do these investors want predictions? Do they just want to know where a public company's quarterly sales are gonna land?
Speaker 6
20:22
And the short answer is yes, but no. That didn't really seem like a sustainable business model. So then we shifted a little bit: okay, maybe these investors really, really want to go deep, and they just want to cut the data any way they want.
Speaker 6
20:40
So we put something together really quickly, put that in front of some of our friends who are investors and found out very quickly through some light user testing that that was way too overwhelming. Our users wanted some guidance. So what we ultimately landed on is we needed to build our own self-service platform where we can apply our opinions around how our customers should be viewing this data and getting quick access to insights. So once we figured that part out, it was actually pretty quick.
Speaker 6
21:12
So my co-founder focused on building the analytics product piece, the front-end application, and together we built the data pipeline and the transformations on the data. But I think some important choices that we made early on, and maybe some that you guys are facing right now, are around what technology you want to use. Like, what do you want to build this in?
Speaker 6
21:41
Ultimately, we decided to build our first application in Groovy using the Grails framework, which is very not shiny; not that many people are doing that, but it was actually the stack we had the most experience in. My co-founder Mike had built large production-scale systems serving hundreds of thousands of concurrent users, and pretty much within a week he had built the bones of the application. And like some of these other stories here, it sounds like none of us are running our V1 anymore. So it's more important in the early days to be able to iterate quickly.
Speaker 6
22:20
And usually that comes from using the technology that you know the best, and building things in a modular way such that once you actually find that product market fit and know what your audience needs, then you can focus on that area and actually select the technologies that are best fit for that problem.
Speaker 2
22:42
Awesome. Okay, so I'm gonna go next to the very top voted question from the Startup School Forum, which is about the trade-off between engineering best practices, like good test coverage and security and scalability and redundancy versus writing code as quickly as possible and shipping something. So can everyone talk about how you made that trade-off for your V1 and then how it's evolved since then and sort of like the timetable of its evolution as your company grew and those things became presumably more important. And just to mix it up, let's try flipping the order and we'll start with Lillian this time.
Speaker 6
23:28
Sure, so speed is paramount. Nobody's gonna pay you for having excellent test coverage. So in the early days, you definitely want the minimum, just what you need, right?
Speaker 6
23:43
So assess the risk to your business around security and things not working. And obviously you want whatever you build to run every day; you don't want it breaking all the time so you're fixing instead of building. But really it is a balance.
Speaker 6
24:01
And I do believe that initially it's more important to have that speed of development, constantly testing and incorporating the learnings into your product, than it is to have the most robust, most scalable product. You're trying to find something that people will pay you money for, or that solves a real problem, and that takes time and a lot of iteration. That has definitely evolved since we got started. Once you find product market fit, a lot of these problems change.
Speaker 6
24:38
You know, your system gets more complex, your team grows, you have a lot more people contributing code, a lot more ways for your systems to break, a lot more users relying on your product. For us, the way that manifested itself is: we started writing unit tests, with each engineer verifying that their code works. Then we introduced CI and CD to make sure that code would work with the whole system. Our director of engineering, who leads our engineering organization, started developing and formalizing our best practices around code reviews, testing, et cetera, defining those processes, and finally building more controls directly into our system to make it harder to break things.
Speaker 2
25:29
And specifically, what was the rough timeline for those things? Are we talking about two months after launch, two years after launch?
Speaker 6
25:38
So I'd say most of that stuff I was just talking about happened probably in the last year to year and a half.
Speaker 2
25:48
And that's how many years after launch?
Speaker 6
25:51
About a year and a half after launch. And the main reasons for that, again, were the growth and the complexity of our systems; our technical staff tripled in the last year. So just that alone introduces a lot more need for process.
Speaker 5
26:12
In our case, it was always about trying to get to an MVP that was reasonably working. And all of that was definitely duct tape code. It's nothing that I'd be proud of.
Speaker 5
26:25
It was just hacked together and barely working. And we were still there even when we did the private launch, in a sense, when we did the private beta and got customers to sign up. We had a number of game developers working with us in that private beta, and that was also duct tape code.
Speaker 5
26:48
The only time we started actually having all the best practices was once we joined Niantic, because now we have to integrate into the games, and Pokemon Go has hundreds of millions of users. We pretty much rewrote the whole code base; none of what we had before is there. It's all rewritten to bear up to that expected number of users.
Speaker 4
27:13
Cool, I think for us at Segment, there were kind of three distinct phases of our development. The first one was when we were in full hackathon mode, where we definitely didn't have any tests, no CI. I'd say there was an 80% chance that we were just going to throw it out two weeks later because it wasn't going to stick, and then we were going to move on to the next thing.
Speaker 4
27:36
I think it was because we had been burned so much by building out all of this infrastructure, investing in ways to provision infrastructure, write tests, and spin up new services over the past year and a half, and it really had gotten us nowhere. We had no users. It just created this pretty strong homing instinct for us, where we said, no matter what, the only thing that matters is getting users at this point in the game.
Speaker 4
28:02
So that's where we started, and I think that lasted us probably for about the first nine or ten months, where we were just focused on rapid iteration, getting more users, and making sure that we didn't run out of money, or the whole thing wouldn't have even mattered. I think from there, the first shift came when we brought on our first engineer, TJ. For those of you who know or are involved with the Node community, TJ is one of the best engineers I think I've ever worked with. I think 90% of Node shops run on some part of his code.
Speaker 4
28:37
He basically came into Segment, looked around at this mess that we had created, the product that we had created, and he said, well, there's no way for me to work in this, I'm having a horrible time onboarding. I think it took him a week to get his entire laptop set up. And so we kind of moved from the point where the entire development team shares the same tube of toothpaste to starting to have more and more engineers involved with the project.
Speaker 4
29:04
And when that happened, it really stepped up our game in terms of testing, CI, and reproducibility when it came to running the builds and running the stack, because suddenly we had to expand outside of just the four of us. Maybe that period lasted another two or three years. Then, over the past two or three years, we've gone through another shift, where we've started thinking a lot more about end-to-end testing, security, and best practices around handling all of our customers' data.
Speaker 4
29:43
And the real reason for that is that now we actually have a pretty significant amount of both like revenue and customers and reputation to lose. From day 1, we had no users, so it didn't matter if the whole thing went down for a day or hours or whatever it was, just the calculus didn't make sense there. But today we have thousands of customers who are relying on Segment to publish their data, to not lose it, to treat it and handle it securely. And so for us, the investment makes way more sense.
Speaker 2
30:12
And that last period, roughly what scale did you have to reach before you
Speaker 2
30:21
started thinking seriously about those things?
Speaker 4
30:23
Yeah, it's a good question. I think it was when we started having enterprise customers who were paying north of six figures per year. That's when it started to shift, and we started thinking, oh, we really have to take care in how we're handling this data, because these customers are really depending on it to power their business.
Speaker 3
30:44
So I think my story's a little different from the other panelists'. This might be interesting. I would say, take all three of those different facets.
Speaker 3
30:50
I think I heard security, I heard scalability, and I heard engineering best practices. We really have to treat them independently at PlanGrid. PlanGrid is used to build, in California, most hospitals, most jails, most heavy civil road work; we're used on all the tech campuses and all the different government buildings that go up. So obviously security from day one, I mean, that's key to us.
Speaker 3
31:12
So we had no choice but to always be super conscious of security, and scalability for that matter, because the only way we're useful is if we're used to build the project. We're the replacement for blueprints, which means in construction, if we're not working properly, no one builds. And not building for a day in construction can seriously impact the bottom line of these businesses. So for us, the first two, security and scalability, were always key. Scalability is a challenging one, because I'm not sure there's a way to do it without it being kind of reactionary.
Speaker 3
31:46
You either over-engineer everything and then maybe never get a product to market, or you eventually have to deal with scaling issues. We tried to architect it in such a way that it would last through the foreseeable future; then the foreseeable future would come, and some customer would find the first flaw in the scalability. To give you an idea of the scale, we have customers with projects that are like 500,000 blueprints, with a ton of changes and maybe over 5,000,000 annotations on one project. And we work offline as well, so this is all downloaded into a cache on the device.
Speaker 3
32:23
Our physical size on an iPad can take up like 100 gigabytes for certain projects. So we've had all kinds of scaling issues to approach, and we always try to balance engineering a year ahead of time, but maybe not three years ahead of time, which is a trade-off. So, find that trade-off yourself. I remember when we were in YC, we had all these stories of companies that had built the most ironclad architecture and just never had a product to market.
Speaker 3
32:47
So that was on our minds while we were building. As for engineering best practices: some of the code I wrote for V1 still exists in the product today. I would say it's probably our measure of technical debt as an engineering organization, so that's humbling.
Speaker 3
33:04
But at the same time, we do everything properly now: CI/CD, integration tests, a UI testing team. So this is feedback I'm giving from the past. But you know, if you're an experienced developer, it's actually pretty hard to write spaghetti code. Some people are just really good from the get-go, but normally it's something you learn through time and writing a lot of production products; you pick up the tricks of not writing spaghetti code pretty quickly.
Speaker 3
33:39
And some of that code runs without heavy testing. The other thing to mention is that properly scale-testing our product would take days in an integration test just to download and upload all the data. So we rely on a lot of functional testing to double-check.
Speaker 3
33:53
But the thing I've noticed with UI and unit tests: we're a big fan of unit tests, we're a big fan of test-driven development. But I've definitely seen developers write the most spaghetti test frameworks and the most spaghetti tests, where they're just testing their mocks. And they're doing this in the middle of a production push for a release. They spent so much time writing these unit tests that actually were not testing anything, rather than getting the product out the door.
Speaker 3
34:17
So engineering best practices are great. There's a trade off between writing a lot of tests and writing good code from the get go.
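The "just testing their mocks" trap described above can be made concrete with a minimal sketch. The `apply_discount` function and `discount_service` collaborator here are hypothetical illustrations, not anything from PlainGrid's codebase:

```python
from unittest.mock import Mock

def apply_discount(price, discount_service):
    """Business logic under test: subtract the service's discount."""
    return price - discount_service.get_discount(price)

# Anti-pattern: this only verifies that the mock was wired up correctly.
# It would still pass if apply_discount ADDED the discount by mistake.
def test_only_the_mock():
    service = Mock()
    service.get_discount.return_value = 10
    apply_discount(100, service)
    service.get_discount.assert_called_once_with(100)

# Better: stub the collaborator, but assert on the actual result.
def test_the_logic():
    service = Mock()
    service.get_discount.return_value = 10
    assert apply_discount(100, service) == 90

test_only_the_mock()
test_the_logic()
```

The first test exercises only the mock's own bookkeeping; the second pins down the behavior of the code you actually wrote.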
Speaker 2
34:24
So speaking of test-driven development, a lot of the questions were about the various engineering methodologies: agile development, lean startup, test-driven development, extreme programming. How do you guys think about those?
Speaker 2
34:41
Do you adhere to any particular methodology in your company?
Speaker 6
34:47
I would describe us as agile-ish. So we do not fully adopt any of those methodologies. Basically, we find what works well for our team and aren't super dogmatic about, you know, is this agile or is this not?
Speaker 6
35:03
We run sprints, we have daily stand-ups, so we have some elements incorporated in our development. But as with anything, you're going to be solving a lot of problems for your organization, and you have to find what works best for you. So I think it makes sense to take the bits and pieces that work for you and adapt them to your organization. The one other thing I'd mention on this: as you grow, a big challenge is just constantly evolving how you work, right?
Speaker 6
35:40
It's very different to work on a small team of 4, where everyone knows everything and everything is in everyone's heads, versus a team of 30 or 50 or, for some of us, hundreds. So you constantly need to be assessing: is this style of working or this methodology right for this stage of the company?
Speaker 5
36:06
Yeah, evolving the process is something we've been doing a lot. Back when we were at Escher, it was only 4 engineers and me building the product. So back then, it was very messy.
Speaker 5
36:19
We were just trying to move as fast as possible, really. So, zero documentation. Everything was whiteboards, and everyone could hold everything in their heads and build it out. We didn't really do sprints, because some engineers didn't quite like them as much.
Speaker 5
36:35
And there were just too many big tasks, so we trusted different individuals to go tackle one area and then we would integrate it. But now things are very different. As we've hired, the team has tripled in the past 8 months for the AR platform product. When you have a bigger team, you definitely need a lot more process to be efficient and communicate, because you don't want to duplicate work, not everyone knows everything, and the system grows a lot in complexity.
Speaker 5
37:03
So we went from zero documentation to a lot more. We used to have a little bit of CI, but now it's very much a CI that actually builds all the architecture flavors (Linux, macOS, Android x86, Android ARM, iOS, et cetera), runs all the tests on all of them, and catches a lot of the bugs. We have test coverage and all those things, but we also had to train the team and get them to be okay with having more process. So now we do something not quite like a sprint, but we do a weekly planning for the week.
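A CI matrix like the one described, building and testing every platform flavor, can be sketched as a small driver script. The target names and `make` commands below are hypothetical placeholders, not Niantic's actual setup:

```python
import subprocess

# Hypothetical list of build flavors the CI loops over.
TARGETS = [
    "linux-x86_64",
    "macos-arm64",
    "android-x86",
    "android-arm64",
    "ios-arm64",
]

def run_matrix(build_cmd="make", test_cmd="make check"):
    """Build and then test every target, collecting failures rather than
    stopping at the first one, so one broken flavor doesn't mask others."""
    failures = []
    for target in TARGETS:
        for cmd in (build_cmd, test_cmd):
            result = subprocess.run(
                cmd.split() + [f"TARGET={target}"],
                capture_output=True,
            )
            if result.returncode != 0:
                failures.append((target, cmd))
                break  # skip testing a target whose build failed
    return failures
```

A real pipeline would fan these out as parallel CI jobs rather than a loop, but the shape, one build plus one test run per flavor with an aggregate pass/fail, is the same.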
Speaker 5
37:39
And then we reconvene, and the team has been breaking down into sub-teams. People float in and out of different focus areas. We split into 3 main focuses. One is the back end, all the work with the back end.
Speaker 5
37:54
A lot of the work on the client is making it cross-platform and the Unity APIs, and the other big one is bringing a lot of the computer vision algorithms to production. The funny thing is that everyone on the team has ended up training to be a computer vision engineer, because that's the core. We have CV algorithms on the server, we have CV algorithms on the client, plus the core algorithms that run. So that's how it has shaped up.
Speaker 5
38:19
But we don't really have a formal process per se. Niantic does OKRs, so now we have OKRs as well, per quarter. So we're starting to plan longer and longer; before, you used to have a rough idea of where things would go, but now it's this, this, this, and this.
Speaker 4
38:38
Yeah, for us, we don't have any set process that teams have to follow at Segment. Individual engineering teams, sort of like what Spotify does, are able to self-organize. They're able to run however they want.
Speaker 4
38:50
If they want to do sprints, that's cool. If they want to use Jira, they can do that. If they want to use Asana, whatever it is, they're allowed to run however they'd like. The one thing that we've introduced over the past 2 years is, similarly, the OKR model.
Speaker 4
39:05
For those of you who aren't familiar, this is a model that was developed at Google. It's the idea of O's, or objectives, and then key results. You have your one objective, where you're trying to go, and then key results that are supposed to be objective measures of how you get there, such that if you do every KR, it adds up to the full objective.
Speaker 4
39:24
And those are something we do on a company-wide basis. Every single team in the company gets together and puts together their OKRs basically one week before each quarter. And then for that three-month period, they're just executing on that plan. Some teams grade those on a weekly basis and check in and say, how am I doing against these goals?
Speaker 4
39:46
Some teams grade them on a monthly basis. It kind of doesn't matter. But at the end of the quarter, every team is saying, hey, here's how well I did against my stated goals. Separately, for the teams that I work with in particular, we run a weekly meeting where at the beginning of the week we plan out what we want to get done, and then we have a daily standup.
Speaker 4
40:10
I honestly don't care too much about the content for those meetings, so long as we're always discussing the most important problems. We're pretty big believers that the tools and process should be there to serve us, not the other way around.
Speaker 3
40:26
My experience at PlainGrid echoes the other panelists: it's agile-ish, self-organizing, every team can choose the tools they want. Maybe some things that would be helpful to you from the beginning, things we've always found very useful in every engineering team, are, again, the daily stand-ups.
Speaker 3
40:44
You've got to communicate on a daily basis. As engineers, sometimes it can be a little difficult to want to communicate every day and not just start programming when you wake up, but those 15 minutes, which you can also do remotely over Slack or something, have always been really helpful in unblocking people. Timeboxing: I think a lot of these management practices roll up into some abstract philosophies, and sprints or timebox methods mean you're just not going to work on something infinitely. That's very helpful for keeping people's realities in check, because often estimates can be off.
Speaker 3
41:14
And you know, sometimes what we thought was going to take us a week can take us a month, myself included. So timeboxing is a good way to keep a cap on that. I'd also say another lightweight management technique you can probably employ right now with your team is one-on-ones, weekly one-on-ones. If you're talking to people in a group and having stand-ups, make sure you also have some time to connect with people individually and learn more about their wants, their needs, their emotions, and their careers.
Speaker 2
41:41
That's great, guys. Okay, so now I want to change topics to the right way to work with non-technical co-founders. I think the topic that came up most commonly in the Startup School forum was the one that Ralph alluded to, which is how to deal with deadlines and timeframes, particularly with non-technical co-founders.
Speaker 2
42:02
And I think particularly in the early days. So does anyone have thoughts on that? Any takers?
Speaker 3
42:08
I'll volunteer for this since my non-technical co-founder is staring at me over there. A few ideas here. If you've got someone that's really non-technical, and I'm not saying my co-founder is really non-technical.
Speaker 3
42:20
I'm sure she can use a computer. But if it's really non-technical, this is an amazing testing opportunity for you. I mean, I remember I would write this software and I'd be so happy about it. And I'd hand it to her, and as soon as she touched it, it would break.
Speaker 3
42:35
I mean, I don't know what she did. She shook the iPad, she rotated it 3 times, but she had a way to break the stuff I wrote, and that was amazing for testing in the beginning. So that's one way to engage your non-technical co-founder. You eventually have to learn how to start padding your deadlines.
Speaker 3
42:49
That's really hard to do. I never got very good at this. I'm always optimistic. Even to this day, I'm like, oh yeah, it's going to take me a week.
Speaker 3
42:55
So I never really got good at this. I'm just aware that I'm optimistic, and then I'm always off by about a week, and she's aware of that as well. I've talked to other engineering leaders who keep two books: a separate set of deadlines they share with their non-technical co-founders, and then another set that's when they actually think they'll get the job done. One thing that can be really helpful is having some very lightweight process of, here's where we're testing, and here's release.
Speaker 3
43:24
Because often people might not realize that iOS has a release cycle that's not controlled by the developers; it's controlled by Apple. So you've got to pad that into your plan. I found that a little bit of project management goes a long way when you're dealing with a non-technical co-founder. And even better, if they're good at project management, that's a great place to employ them as well.
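The padding practice described here, a gut estimate scaled by an optimism factor plus lead times you don't control, can be sketched in a few lines. The 1.5x factor and the 3-day review window are invented numbers; tune them to your own track record:

```python
from datetime import date, timedelta

def shipped_by(start, gut_days, optimism=1.5, review_days=3):
    """Pad an engineer's gut estimate by a historical optimism factor,
    then add fixed lead time outside your control (e.g. app review)."""
    return start + timedelta(days=round(gut_days * optimism) + review_days)

# Gut says 10 days; the padded plan lands 18 days out.
print(shipped_by(date(2024, 3, 1), gut_days=10))
```

The second set of books is just the unpadded `gut_days`; the set you share externally comes out of a function like this.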
Speaker 4
43:45
Yeah, so all of my co-founders are technical, so we didn't really run into this problem in the early days. Everyone's like, oh yeah, I know what's happening here, and if something slips, we know exactly why, so it wasn't as big of an issue.
Speaker 4
43:56
More recently with deadlines, the trick that I've started using is, in one-on-ones, asking each person what they think will happen over the next 3 or 4 weeks. What I find asking one-on-one is that, one, people don't anchor against each other's answers, so you get a very unbiased view, and it forces each person to think about the near-term future. And second, each person is much more wary of the parts that they didn't work on, so you get a sense of where there might be trouble spots.
Speaker 4
44:29
And typically the end result is sort of a wisdom of the crowds, where it's the average of all 3 or 4 answers. So I've started using that as a way to get a better sense of deadlines: each person has their prediction or mental model of what's going to happen, and then you average them all out to where it should be.
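The averaging trick described here amounts to a few lines; the names and numbers below are hypothetical:

```python
from statistics import mean

def crowd_estimate(answers):
    """Average independent one-on-one estimates (in weeks); the spread
    between the lowest and highest guess flags likely trouble spots."""
    values = list(answers.values())
    return mean(values), max(values) - min(values)

# Hypothetical answers from three one-on-ones.
estimates = {"alice": 3, "bob": 4, "carol": 6}
expected, spread = crowd_estimate(estimates)
# expected comes out around 4.3 weeks; a spread of 3 weeks suggests the
# team disagrees, which is worth digging into before committing a date.
```

Because the answers are collected one-on-one, nobody anchors on anyone else's number, which is what makes the average meaningful.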
Speaker 5
44:50
Well, I guess in my case, my co-founder hadn't worked in engineering or built and shipped products, mostly just in research; he'd been doing mostly a PhD. So in that sense it's a little bit non-technical, in the sense of not having the experience of building and shipping products, just the algorithm side.
Speaker 5
45:10
So there was a bit of work to understand why certain things take long and why they need to be built a certain way, but at some point he just let me handle all of that, and he got engaged mostly on the external communications of the business, trying to find partners and all that. Sometimes that can be the split: as a technical founder, you end up owning the product and development, having clear communications, and at least trying to set expectations with deadlines. That's the hard part. There's always a trade-off between engineering and product and business, right?
Speaker 5
45:50
The trade-off of when you'll get things done, and being able to communicate that clearly.
Speaker 6
45:58
So my co-founder is technical, so I don't have much to offer here. Not much to add on deadlines and time estimates either: do your best guess, do some padding. One thing on the non-technical front that I do think is worth mentioning: today a lot of you are probably building your products and writing code, especially if you're one of the technical co-founders.
Speaker 6
46:27
At least in my experience, that changes. I pretty much stopped contributing to our code base about a year into the company. For us as leaders, well, I can't speak for all of us, but at least for me, there's a pretty big shift that happens once you start gaining traction, where it's less about building a product for a market and finding that fit, and it shifts more into building an organization and a company, a system of humans who are going to build something long-lasting and great without a lot of your involvement. So I guess what I'm trying to say is, at some point you may end up doing a lot more non-technical things.
Speaker 6
47:11
So get ready for that.
Speaker 2
47:14
Yeah, that's great. Okay, so a lot of the companies in Startup School are at the phase where they're trying to figure out the right way to structure their early engineering team: whether they want to hire people locally or remotely, whether they want to hire only full-time employees, or whether they want to hand off pieces of the product to contractors or third-party development firms.
Speaker 2
47:45
Can you guys talk about how you would think about that if you were starting a company now? Advice that you would give them. And I think this is gonna be the last question and then we're gonna open it up to the audience.
Speaker 6
47:57
I can start. So we did not start with contractors. We started by hiring full-time employees.
Speaker 6
48:06
I think the main guidance I would give here is: you are building a product and a company, and you're developing these core competencies. For the most part, if you end up contracting that out, you're not just contracting out the technology expertise; in these early days you're also learning a lot about what's going to work for your clients and what the product actually is. So having that external to your company is a really big loss in the early days, just in terms of all that knowledge. I could see an argument if you're just getting off the ground and you're doing contracting as a way to evaluate potential new hires.
Speaker 6
48:49
I could see that definitely being a path. But again, that's sort of with the intention of building your team. On the outsourcing front, I think it's a similar thing. You really need to look at what is your core competency, that's what you're investing in, what's your IP.
Speaker 6
49:05
You don't want to outsource that at some point. If you do, you're pretty much just a marketing company. That said, different business models may have different requirements, so for some of you it may make sense to outsource a large part of what you're building.
Speaker 6
49:25
But for us, we have experimented a little bit with outsourcing. I'd say our approach has really been to make sure those projects are very well defined and not on the critical path so that we can experiment. It is a different type of project to manage sort of outsourced talent. And that way you can sort of learn, see if that works for your business.
Speaker 6
49:48
And if it does, great, and if it doesn't, it's fine. Was there also a question about remote?
Speaker 2
49:53
Yeah, local versus remote employees.
Speaker 6
49:56
I think that depends on what type of culture you want to build. So for us, it was really important for us to have our team together. So we hired up a local team, and then sort of in the last year or so, we have actually started tapping into remote engineers.
Speaker 6
50:15
There's definitely a little bit of a culture shift with that. We're still predominantly local, but there are a lot of advantages, including being able to tap into other talent pools. And it has actually turned out to be a really good forcing function for documentation, explaining your code, explaining your decisions, which is good for a number of reasons; as you grow, communication is one of the harder problems you have to solve.
Speaker 5
50:46
So for us, because the core of the technology is what our company is, we didn't outsource any of that. We did contract people as part of the interview process: we would work with someone on contract for a month before we gave them an offer.
Speaker 5
51:03
So that's what we did. But we did outsource things we thought were not essential to our company: building 3D models, some of the design, writing some of the technical documentation, websites. Things we could do if we had the time; for example, I could build a website, but our time was better spent on the core tech. So those sorts of things we did outsource.
Speaker 5
51:34
And in terms of remote versus local, because everything we built was so complex, we chose as a culture to be all on-site, and to this day we're still building the team locally. There are a lot of stories of companies that do remote well, but I think you have to start remote-first.
Speaker 4
51:53
Cool, yeah, I can share a little bit from our story. So in the very early days, we basically only hired full-time folks. We didn't experiment with contractors.
Speaker 4
52:04
Kind of the only case where we'd outsource is when AWS could take some piece of infrastructure that we're building or some new product feature and we could build on top of that. I think that was still probably the right move, especially in the early days. It feels like a startup is really fragile and you want a bunch of people who are just all pointed in the same direction and really kind of in it for the long haul. On the local versus remote piece, we actually started hiring pretty much only remote people.
Speaker 4
52:32
We were really involved with a lot of different open source communities on GitHub, particularly some in Node in the very early days, some JavaScript ones, Component, et cetera. These were people we had been working with and collaborating with for years at that point, so when we were ready to start hiring, we just said, hey, would you like to work more closely together? It came with its own set of trade-offs. On the plus side, we had access to people who were insanely talented, not in the San Francisco Bay Area, who were used to working with us already, and who very clearly built an insane amount of value for Segment.
Speaker 4
53:15
I think the trickier part for us to get right was the communication piece, and all being pointed in the same direction from a product perspective. So while we grew remote for about the first 10 or 15 hires, we then only grew locally through about the next 70, and only recently started opening remote back up. I think now it's because we actually have enough bandwidth to create a really good remote culture, where we're putting an emphasis on that documentation and communication piece, and we're able to hire folks who have been working remotely for years as well.
Speaker 3
53:50
Some really good advice given on this panel so far. I think the trade-offs are the thing for you to keep in mind the most. I was thinking back towards contractors, and I'll tell you a quick story of a contractor.
Speaker 3
53:59
Our first two hires were actually, as I was thinking about it, contractors. One of them helped us organize our bills and our office stuff, and the other helped us develop some technology; someone I worked with when I was at Johns Hopkins. Even though he was a contractor at first, we had no money to pay him, but because he was my friend, we paid him in equity.
Speaker 3
54:22
And he eventually joined us as our first engineer, helped us build some really core technology for our company, and really was a great addition to PlainGrid. So even though I wouldn't suggest hiring a contractor immediately, we just did what was right for our company size, and what we needed right then was, hell yeah, I need this guy to write some stuff for me and he'll take equity as payment, wonderful, because I don't have anything else to pay him. So be open-hearted about what you think is needed for your business right then. Know there are a lot of trade-offs with contractors.
Speaker 3
54:51
The other thing to note is that this was my friend. We've had other contractors I've hired as well. There's this naive feeling, and even now I feel it, when you hire a contractor: oh great, I don't have to manage this person. That is so not true. Managing contractors is significant time for you as a founder.
Speaker 3
55:07
That is not a free hire. It's sometimes actually worse: more management time for a contractor than for a full-time employee. So keep that in mind. The remote thing is also an interesting topic.
Speaker 3
55:18
When we started our company, we aimed to have people in San Francisco, but I realized that as co-founders we actually split off for months at a time. I'd say, I'm going to go back to Chicago for a month, and then one of my co-founders would work out of Chicago for a month or two. And that was totally fine as well. We've gone back and forth on this over our 8 years, over and over again, between allowing remote, not allowing remote, allowing remote when it's a friend or a really best-in-class IC who can develop a lot of things.
Speaker 3
55:48
Now we're completely open as a company. It doesn't matter. We hire managers remote. We hire designers, product people, everyone.
Speaker 3
55:53
It doesn't really matter. We're just looking for the best talent we can get. I think the best advice you've gotten already is just do what's right for you at the size of your company and know that there's trade-offs with all these decisions, but there's no easy answer for any of these questions.
Speaker 2
56:06
Great, okay. I think we've got time for one or two questions from the audience. Questions? Questions? One over there.
Speaker 4
56:17
You, yes. So if you were to start another company again, how would you accelerate the development process? What would you do differently?
Speaker 2
56:27
Okay, I'm just gonna repeat the question so everyone can hear it. If you were going to start another company now, what would you do to accelerate the development process? Any takers?
Speaker 5
56:38
Yeah, sure. I think on my side we spent too much time building technology; try to really get to product-market fit as soon as possible. And it's a challenge in terms of the area you pick, because if you're in a frontier tech space like AR, VR, or, say, mobile tech, those lead times can be really long.
Speaker 5
56:57
Or think about what kind of company you want to build.
Speaker 6
57:02
I'd say get in front of users all the time. Find your user base. It's actually a lot harder to build a product in a vacuum where you're just in your head, maybe ideating with your co-founder: what if we do this?
Speaker 6
57:18
What if we do that? These all sound great, or this idea sounds bad, but it's way easier to just put it in front of somebody and have them say, this is great, or this sucks. That feedback loop is invaluable.
Speaker 4
57:31
Yeah, I'd echo the sentiment on users 100%. Get out in front of them, start showing things to them, and make sure you're digging deep to verify you're really solving their problem. If I were to start again today, I'd actually build 2 or 3 versions that I just intended to throw away.
Speaker 4
57:48
Maybe this is biased by prior experience, but the fact that you can get reps in a very safe manner, so that when you're really ready to start you can just go nuts building out the V1, I think is really powerful.
Speaker 3
58:04
You know, I love mobile development, I love Swift, I love Kotlin, but I don't think I would choose to have to write for 4 different platforms. If I were starting again, I probably would have picked one of the cross-platform development libraries. I don't know which is best.
Speaker 3
58:16
They all seem to have trade-offs; if anyone has strong opinions on which one is best, tell me after this. Probably the best advice I could give, though, is that I'd stop coding a little earlier and start hiring a little earlier, because a lot of me wanted to passionately write those features, and if I had instead been passionately hiring other developers, I feel we would have built faster. Interesting.
Speaker 3
58:39
Okay.
Speaker 2
58:40
One more question over there.
Speaker 5
58:43
As a non-technical founder, what do you suggest for communicating with CTOs regarding the implementation of new features, where the CTO, or the technical founders, would insist on stability and scalability, and you're driving features requested by the users? What would the communication style be?
Speaker 3
59:06
Get them in front of the users. Bring them to the meetings, get them on the phone. And frame the questions: set it up a little so the users actually address the topic, or you won't get the feedback you're looking for, and maybe there's some adjustment there, too.
Speaker 3
59:20
So get them in front of users as much as you can. That's the most powerful tool you have, in my opinion.
Speaker 2
59:28
Okay. Thank you all so much, this was great.