dlum - Monday, January 29, 2018 - link
Wishing all the best to Aquantia for being the first to give us hope for widespread adoption of multi-GB Ethernet, esp. in general/consumer-level/non-enterprise sectors.
Hope such strong collaboration/partnership makes them stronger and brings closer the moment when any non-lowest-end-budget motherboard comes equipped with 2.5 or 5 GbE.
peevee - Monday, January 29, 2018 - link
10GbE is not multi-GB (GB being GigaByte, unlike Gb which is Gigabit). Its real throughput is almost exactly 1 GB/s.
And there is nothing new in 10GbE.
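(As a rough sanity check on that figure, here is a quick back-of-the-envelope in Python, assuming standard 1500-byte frames and TCP over IPv4. Jumbo frames would push the number a bit higher, and real-world throughput lands somewhat lower once driver overhead and retransmits are counted.)

    # Rough 10GbE payload-rate estimate, assuming 1500-byte frames and TCP/IPv4.
    line_rate = 10e9  # 10GBASE-T line rate in bits per second
    # On-wire overhead per frame: preamble + SFD (8 B), Ethernet header (14 B),
    # FCS (4 B), inter-frame gap (12 B); IPv4 (20 B) and TCP (20 B) headers
    # leave 1460 B of application payload out of 1538 B on the wire.
    payload_fraction = 1460 / (1500 + 38)
    goodput = line_rate * payload_fraction / 8   # bytes per second
    print(f"{goodput / 1e9:.2f} GB/s")           # ~1.19 GB/s
    print(f"{goodput / 2**30:.2f} GiB/s")        # ~1.11 GiB/s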
dlum - Monday, January 29, 2018 - link
Yes, I meant multi-Gb (2.5/5 Gb or more).
Why should it be new? It's still a far cry from being standard in consumer MBs.
Santoval - Monday, January 29, 2018 - link
It is not new in the server and datacenter market, but it is still almost non-existent in the consumer market - thus the "widespread adoption" mention.
Duncan Macdonald - Monday, January 29, 2018 - link
The high resolutions and dynamic range quoted MUST not be hard requirements - even a light mist (still OK for human driving) will drop the resolution to VGA levels and the dynamic range to 6 bits or less. If the control system cannot handle such conditions, then it is not suitable for road use.
Billy Tallis - Monday, January 29, 2018 - link
You're thinking in terms of the dynamic range exhibited by a single frame of video data. But you want the car to be able to get those 6+ bits of contrast in every frame, even as lighting conditions change quickly and drastically. You don't want to wait for a feedback loop to adjust sensor gain over the course of several frames every time you go into a tunnel.
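(To put a number on that feedback-loop delay, here is a toy Python sketch, not any real camera's auto-exposure algorithm, of a rate-limited exposure loop reacting to a sudden 100x drop in scene brightness, like driving into a tunnel. All the constants are made up for illustration.)

    # Toy model of a per-frame auto-exposure feedback loop after a sudden
    # 100x drop in scene brightness (e.g. entering a tunnel). Real AE loops
    # are rate-limited to avoid flicker, so recovery takes several frames.
    scene = 1.0 / 100        # new scene brightness (was 1.0 before the tunnel)
    exposure = 1.0           # exposure/gain still tuned for the old brightness
    target = 0.5             # desired mean sensor level
    frame_ms = 1000 / 30     # 33.3 ms per frame at 30 fps
    frames = 0
    while abs(scene * exposure - target) > 0.05 * target:
        measured = max(scene * exposure, 1e-6)
        step = min(max(target / measured, 0.5), 2.0)  # at most a 2x change per frame
        exposure *= step
        frames += 1
    print(f"recovered after {frames} frames, about {frames * frame_ms:.0f} ms")
    # -> roughly 7 frames, ~230 ms of badly exposed video with this made-up rate limit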
syxbit - Monday, January 29, 2018 - link
I wish Nvidia would make a new Shield device with the Xavier SoC.
Amandtec - Monday, January 29, 2018 - link
With small plastic wheels bundled in? Half of the Xavier SoC is for self-driving cars only. I think what you actually want is the latest GPU in your Shield device when it launches later this year.
Threska - Monday, January 29, 2018 - link
One would think fiber optics would be a better fit. Automotive is a harsh, noisy environment. The electrical isolation of fiber would help as well. Also moving processing closer to the respective sensors would help.
Ian Cutress - Monday, January 29, 2018 - link
Optics can't be bent and molded the same way copper can, especially when it's a tight fit.
I addressed the issue of moving processing closer to sensors in the first paragraph: if you encode at the sensor then decode at the SoC, it adds latency. Now if you want to process frames at every sensor, you'll need an SoC at every sensor, and find a way to ensure that the system can work when one of those sensors is knocked out. At this point in time, centralized processing for Level 4/5 is the preferred fit.
rhysiam - Monday, January 29, 2018 - link
The cameras need power too, so even if you went fibre for the data you'd still need to run copper anyway. That'd mean two cables and an extra point of failure.
Amandtec - Monday, January 29, 2018 - link
Optics may also shatter if there is an initial but not serious impact in an accident, rendering the car unable to function properly for the remainder of the 'accident process'.
mode_13h - Tuesday, January 30, 2018 - link
Yeah, kind of like how you've got little brains distributed all over your body. ...not! And while some processing does happen in the optic nerve, that's all about the high latency of nerve fibers.
Distributed processing would add cost to the sensor modules and potentially add cooling requirements or perhaps at least make them more bulky and difficult to mount. It also increases the set of potential failures they can have and increases overall system complexity.
If the processing can be handled in a centralized fashion, then why not? It simplifies a lot of problems, like fault tolerance. Another benefit is that you could upgrade everything by simply swapping out a single compute module.
Stochastic - Monday, January 29, 2018 - link
It's kind of interesting how the evolution of self-driving tech might mirror the evolution of our own nervous system in some respects. Our visual stream has similar latency/bandwidth/computation tradeoffs. For instance, only our foveal vision is high acuity because the optic nerve simply doesn't have the bandwidth to transmit an entire visual field's worth of high-fidelity data. There are many other insights about information processing that can be gleaned from the mammalian nervous system.
mode_13h - Tuesday, January 30, 2018 - link
Yeah, you're right. They should use foveal processing and then add motors to pan and tilt each of the car's cameras, like real eyeballs. That would surely be an improvement, plus it would sound cool to hear the car looking around all the time, and it would keep mechanics employed replacing all of those motors.
And then, like real retinas, maybe they can add a blind spot to the sensors. It must be a good solution, because it's the one nature arrived at, and natural designs are always globally optimal and perfect, right? It's why humans are incapable of any perceptual or cognitive errors, so we should design computers to be exactly the same as us. The more limitations of biology we can faithfully reproduce in copper and silicon, the better they will surely be.
But why stop there? Maybe they could switch from 10 GbE to some sort of electro-chemical signalling mechanism, to make the car more natural and feel more alive. Increasing latency and decreasing bandwidth can only be a good thing, since it will make the car's network more like the nervous system of animals, which are the pinnacle of all design in the universe, regardless of biological and material limitations.
mlvols - Tuesday, January 30, 2018 - link
I think I sensed sarcasm, but maybe it's just my biological limitations playing tricks on me...
Baub - Monday, January 29, 2018 - link
Firstly, I agree with pushing for higher bandwidth in all infrastructure. I have gigabit internet, and I wish it were more widely adopted across the internet. Now, doesn't 3 Gbps seem a bit high for 1080p30 at 8 bits per channel? It seems like 10 Gbps wouldn't even be a drop in the bucket if you had 8 or 10 cameras going at that bandwidth. How much compression versus latency are we talking about? Smartphones can record much lower-bandwidth video with only milliseconds of latency; 30 FPS gives a 33.3-millisecond window before the next frame has to be recorded. It just seems like the figures are inflated.
Billy Tallis - Monday, January 29, 2018 - link
The bit rates in that slide are for uncompressed video. PCs operate in the far right column of 24 bits per pixel, 8 each for red, green, blue. If your computer vision system is monochromatic, then you may be able to get away with far fewer bits per pixel.
Alternatively, you can read that table as indicating the bit rate when compression results in an average of, e.g., 8 bits per pixel, which would be 3:1 compression if starting with 24bpp RGB data.
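(For anyone who wants to check the arithmetic, a quick sketch of raw, uncompressed bit rates. The resolutions and bit depths below are illustrative examples, not the exact rows from the slide.)

    # Raw (uncompressed) video bit rate in Gbit/s for a few illustrative formats.
    def raw_gbps(width, height, fps, bits_per_pixel):
        return width * height * fps * bits_per_pixel / 1e9

    print(raw_gbps(1920, 1080, 30, 8))    # 1080p30, 8 bpp mono:  ~0.50 Gbit/s
    print(raw_gbps(1920, 1080, 30, 24))   # 1080p30, 24 bpp RGB:  ~1.49 Gbit/s
    print(raw_gbps(3840, 2160, 30, 24))   # 2160p30, 24 bpp RGB:  ~5.97 Gbit/s

So a single uncompressed 1080p30 RGB stream is around 1.5 Gbit/s before any link-layer overhead, and a handful of higher-resolution or higher-bit-depth cameras gets into 10 GbE territory quickly.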
N Zaljov - Tuesday, January 30, 2018 - link
This might be the stupidest question in this regard, but: is there a reason not to use MIPI C-PHY for the sensors instead of GbE, apart from it being a standard for embedded stuff (well, the sensors would actually count as embedded, but whatever...)?
I'm asking because there's not much info available (at least openly) about maximum wiring lengths, SNR, distortion in general, etc. that could turn C-PHY into a non-starter for autonomous driving solutions.
Kevin G - Tuesday, January 30, 2018 - link
If a manufacturer wants really low latency and high-quality imaging, they'll skip Ethernet entirely and just use SDI. No conversion latency from a camera and no network overhead. 12 Gbit bandwidth on copper is possible today, with a planning group working on a 24 Gbit version.
There is also the potential to move to twinaxial cabling to further improve reliability. Everything automotive seems to be based on commodity specs but slightly modified (see HDMI Type-E).
mode_13h - Wednesday, January 31, 2018 - link
But Ethernet is *so* much more flexible than SDI. You can have all kinds of sensors - not just video - as well as control and multiple devices communicating with them.
And does SDI support 16-bit or 20-bit HDR? I've seen truly HDR machine vision sensors that can output such formats.
Kevin G - Thursday, February 1, 2018 - link
Use the best tool for the job. If those other sensors don't need much bandwidth, then vanilla 100 Mbit/1 Gbit Ethernet or USB would be fine. SDI does make sense for the video portion without the need to build a 10 Gbit or faster Ethernet network into a car.
As for SDI supporting 16-bit color per channel, yes. I've worked with such a system whose output was to a true 14-bit-per-channel LED system. Not sure offhand about 20-bit color, but it wouldn't surprise me.
mode_13h - Thursday, February 1, 2018 - link
So, you're going to run a whole spaghetti of different cables all over the car? Have you forgotten that Ethernet can be switched?
And what about variable framerate? Can SDI support that? What about upstream control? What about peer-to-peer communication, in case something else on the network wants to talk to one of these sensors or vice versa?
Ethernet is the most general and flexible solution. It's commodity (okay, well, 10 GbE isn't quite - but more so than SDI), and there's a wide variety of parts and suppliers. So, when you say "use the best tool for the job", I think it'd be hard to do better than Ethernet.
As the saying goes: "Never bet against Ethernet."