Sony is going through its most difficult stretch as a smartphone manufacturer. With only 400,000 units sold during the first quarter of 2020, a third of its figure for the same period of 2019, the company is focused on stemming its losses. However, two segments still perform very well for it: one is, of course, PlayStation, with 110 million PS4 consoles sold according to its latest announcement, and the other is image sensors. The latter are now getting their dose of innovation, with Sony announcing from Japan sensors with AI on board.

Sony manufactures a good share of the image sensors in mobile devices of all kinds, from its competitors' smartphones to drones such as the new Mavic Air 2 and products of every sort. But ever larger and more sensitive sensors are no longer enough. In an era defined by computational photography and the possibilities it opens up, the iconic Japanese manufacturer is diving in with two sensors that carry artificial intelligence directly on the chip: the IMX 500 and its packaged version, the IMX 501.

Sensors with artificial intelligence

The IMX 500 and IMX 501 are "only" 12.3-megapixel sensors, a resolution that steers clear of the megapixel race and of the latest convoluted techniques such as pixel binning. It is also a fairly compact sensor, measuring 1/2.3 inches, a far cry from the 1/1.4-inch IMX 689 found in the OnePlus 8 Pro and the Oppo Find X2 Pro.


In return, on the back of these AI sensors, or "intelligent vision sensors" as Sony calls them, sits a tiny chip capable of tasks that would otherwise fall to the main SoC of the device in question. The sensor is aimed at lighter workloads, more at IoT devices than at delivering the best image quality in a camera.

The advantages of ‘edge computing’, directly on the sensor


Sony lists the advantages of a sensor with these characteristics compared with current systems:

Latency: if the processing is done on the sensor itself, instead of on a dedicated processor or on a server in the cloud, the system's response time is shorter. It is also more robust against interruptions in data transmission.

Security and privacy: with image processing embedded in the sensor, only the most essential information leaves it, once the algorithms have already done their work.

Efficiency and cost: in text recognition, for example, instead of transmitting an image with thousands or millions of pixels, the same information can be sent in a few bytes, as the sketch after this list illustrates. It also avoids the added cost of dedicated external systems, which drive up the price of facial recognition equipment, for example.
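
To put that efficiency argument in perspective, here is a minimal, purely illustrative sketch (it does not use any Sony API; the pixel format and the metadata fields are assumptions) comparing the data volume of a raw 12.3-megapixel frame with that of the compact result an on-sensor model could send instead:

```python
import json

# Illustrative assumption: a 12.3 MP frame at 3 bytes per pixel (8-bit RGB).
PIXELS = 12_300_000
BYTES_PER_PIXEL = 3
full_frame_bytes = PIXELS * BYTES_PER_PIXEL  # roughly 36.9 MB per frame

# Hypothetical metadata that on-sensor inference might output instead:
# a label, a confidence score and a bounding box; nothing else leaves the chip.
metadata = {"label": "person", "confidence": 0.97, "box": [412, 230, 180, 360]}
metadata_bytes = len(json.dumps(metadata).encode("utf-8"))

print(f"Raw frame:     {full_frame_bytes / 1e6:.1f} MB")
print(f"Metadata only: {metadata_bytes} bytes")
print(f"Reduction:     ~{full_frame_bytes // metadata_bytes:,}x less data to transmit")
```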

Sony claims that the on-board machine learning models can be selected from a pre-loaded list for counting, prediction, measurement, heat maps or congestion analysis, among other tasks. These AI sensors can also be updated with custom models developed by whoever deploys them. The company points, for example, to the product and customer recognition systems of staffless supermarkets, in the style of Amazon Go.
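
Conceptually, the workflow Sony describes, picking one of the pre-loaded tasks or swapping in a custom model, could look something like the hypothetical sketch below. The model names and the run_on_sensor helper are illustrative stand-ins, not part of any real SDK:

```python
from typing import Callable, Dict

# Hypothetical stand-ins for the models Sony says come pre-loaded on the sensor.
def count_people(frame) -> dict:
    return {"task": "counting", "people": 12}  # illustrative output only

def heat_map(frame) -> dict:
    return {"task": "heat_map", "cells": [[0, 3], [5, 1]]}

PRELOADED_MODELS: Dict[str, Callable] = {
    "counting": count_people,
    "heat_map": heat_map,
}

def run_on_sensor(task: str, frame) -> dict:
    """Pretend the selected model runs on the sensor's own logic chip;
    only this small dictionary ever leaves the chip."""
    return PRELOADED_MODELS[task](frame)

# A custom model supplied by the integrator simply replaces or adds an entry.
PRELOADED_MODELS["shelf_stock"] = lambda frame: {"task": "shelf_stock", "empty_slots": 4}

print(run_on_sensor("counting", frame=None))
print(run_on_sensor("shelf_stock", frame=None))
```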
