That’s not a bungalow. Am disappoint, Onion.
I am sad for the tens of millions of millennials who will dutifully care for their asshole parents only to discover the house goes to the reverse mortgage company and the savings are long gone. Fucking boomers, absolutely immune from consequences for their entire lives. Trump as President is their perfect departing fuck you.
Lol SoftBank.
Yeah I’m fairly certain that SoftBank is the fish at the vc game.
I mean it’s literally their name
One of the unsolved problems is how to efficiently consume and analyze this data meaningfully. It’s on the order of many, many thousands of TBs. I may be wrong here, but I think a major problem is that no database out there is really fast enough to serve this data in split-second, latency-critical situations like an automated vehicle.
My company is trying to solve this data problem. I don’t think we have the answer, but our software is currently being put into some VERY interesting applications (like IoT edge devices in truck fleets).
I mean the whole database is obviously huge and that creates significant logistical problems if you’re trying to use it all at once… but can’t you just pull the local area relatively slowly and then use what you pulled rapidly?
I feel like there are tons of situations where the size of the whole database is a problem (and IoT where you’re tracking thousands/millions of objects in real time across vast geographical areas is definitely one of them)… but I hadn’t really considered self driving to be one of them.
I’m probably showing how not technical I am, but how is pulling the local maps into local RAM for instant use something you have to do fast? (I’m genuinely asking, no offense meant I really have no idea what I’m talking about)
Yes, that’s pretty quick. But there’s a lot of IoT stuff in self driving vehicles. It’s one of the areas our sales people are always trying to generate leads in, so I know there’s a data problem somewhere there; unfortunately, that’s about the depth of my knowledge about it.
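The pull-ahead-then-use-locally idea from a couple of comments up can be sketched roughly like this. Everything here is invented for illustration (the tile store, the radius, the class names) — it’s just the general pattern of prefetching nearby map tiles over a slow link so lookups at drive time hit RAM instead of a remote database:

```python
# Minimal sketch, all names invented: prefetch map tiles near the car's
# position into an in-memory cache (slow, done ahead of time), then
# answer lookups instantly from RAM (fast, done at drive time).

# Stand-in for the huge remote map database, keyed by tile coordinate.
SLOW_STORE = {(x, y): f"tile({x},{y})" for x in range(100) for y in range(100)}

class TileCache:
    def __init__(self, radius=2):
        self.radius = radius
        self.cache = {}

    def prefetch(self, cx, cy):
        """Slow step: pull every tile within `radius` of the current position."""
        for x in range(cx - self.radius, cx + self.radius + 1):
            for y in range(cy - self.radius, cy + self.radius + 1):
                if (x, y) not in self.cache:
                    self.cache[(x, y)] = SLOW_STORE.get((x, y))

    def lookup(self, x, y):
        """Fast step: a RAM lookup, no round trip to the remote store."""
        return self.cache[(x, y)]

cache = TileCache(radius=2)
cache.prefetch(50, 50)       # done in the background as the car moves
print(cache.lookup(51, 49))  # instant; prints "tile(51,49)"
```

The point of the pattern is just that the whole-database size stops mattering once you only ever ship the local neighborhood to the vehicle.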
I think self driving cars are going to eventually be a winner-take-all business. The actual cars will become capital intensive, low margin commodities and google will have a monopoly on the software, plus additional ad revenue from people being on the internet instead of driving, plus all the data about who is where, when.
We really do need some sort of Data Rights legislation ASAP before things like that become too big to reverse (if they aren’t yet)
Yeah if we aren’t careful we might get a situation where one giant and unaccountable company controls basically all the information flow on the internet, scary stuff imo.
We’re most of the way there
I don’t mean this in response to self-driving cars - some of that data is very important - but the only defense against big data is probably flooding the market with bad data.
Cars are already capital intensive, low margin commodities.
I really don’t see Google having a software monopoly though. It’s not like we have all the technology in place and are just waiting on the perfect algorithm to bring it all together. There’s a lot of hardware involved as well. The development is moving at a near tortoise-like pace. I just don’t see this area as being ripe for some tech breakthrough that accelerates self-driving into the marketplace. I feel we’ll be stuck at level 3ish for a while. The days of being able to get into a car and fire up Netflix on your way to work just don’t seem to be here yet. Although I’ve worked in this industry for a while, never have I seen auto companies dump money into unproven tech like they have with autonomous. They are certainly acting like there is a pot of gold at the end of this rainbow.
The size of the data is not a problem. The big data is used to train the various machine learning models, which are then loaded into the cars for runtime use. The models are basically pattern matching very fast against the known inputs, kind of analogous to an index for a relational database.
For example, at drive time, the cameras feed images into one model, and when the model pops out HAZARD (P > x), the vehicle slows down or stops. It didn’t have to run a big data query, it just distilled the current data into the most likely bucket (HAZARD).
Obviously, this is greatly simplified, and likely wrong in some aspect, but the main point is that building the models based on all that data can take many hours or even machine years of CPU time, while querying the model takes microseconds to milliseconds. The more detailed and coherent the input data sets, the better the models.
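To make the train-slow/query-fast split concrete, here’s a toy version of the HAZARD example above. It’s a deliberately crude sketch — the features, thresholds, and training loop are all made up, and a real perception model would be a deep network, not this tiny logistic classifier — but the shape is the same: the expensive pass over the big dataset happens offline, and what ships to the car is a small model whose query is a dot product:

```python
# Hedged sketch: all names, features, and thresholds invented for
# illustration. Offline "training" distills many labeled examples into
# a small model (here, per-feature weights). Online inference is one
# dot product plus a threshold check -- no big-data query at drive time.

import math

def train(examples):
    """Slow, offline step (the data-center side): crude logistic-
    regression training over labeled feature vectors."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(100):                       # many passes over the dataset
        for features, label in examples:       # label: 1 = hazard, 0 = clear
            score = bias + sum(w * f for w, f in zip(weights, features))
            pred = 1.0 / (1.0 + math.exp(-score))
            err = label - pred
            weights = [w + 0.1 * err * f for w, f in zip(weights, features)]
            bias += 0.1 * err
    return weights, bias

def hazard_probability(model, features):
    """Fast, in-car step: one dot product, microseconds, no database."""
    weights, bias = model
    score = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))

# Toy features: (obstacle_size, closing_speed), both normalized to 0..1.
training_data = [
    ((0.9, 0.8), 1), ((0.7, 0.9), 1), ((0.8, 0.7), 1),
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.0, 0.3), 0),
]
model = train(training_data)

THRESHOLD = 0.5  # the "P > x" cutoff from the comment above
if hazard_probability(model, (0.85, 0.9)) > THRESHOLD:
    print("HAZARD")  # vehicle slows down or stops
```

The asymmetry is the whole point: the 100-epoch loop stands in for the hours-to-machine-years of training, while `hazard_probability` stands in for the millisecond-scale query the car actually runs.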
I’m surprised you all are not crowing about the WeWork implosion!
Target range down to $10-12bn. Completely brutal for the Vision Fund to have a 75% markdown of their investment from just a few months ago.