Apple is expected to launch its new child safety (CSAM detection) feature later this year with the release of iOS 15, but a developer says he found the algorithm behind the feature in older versions of the iPhone operating system and discovered a technical flaw in it. So what is this flaw, and how did Apple respond?

A flaw has been discovered in Apple's anti-child abuse technology, and Apple responds


CSAM image analysis and scanning feature

A Reddit user says he discovered a version of the NeuralHash algorithm used to detect CSAM images within iOS 14.3.

Apple's response was that the extracted version is not up to date and is not the one that will be used.

The developer known as AsuharietYgvar was able to find NeuralHash code buried inside iOS 14.3 and said that he reverse-engineered it and reconstructed a working model in Python that can be tested by passing images through it.
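For illustration only, the snippet below sketches the general shape of a perceptual-hash pipeline like the one the developer rebuilt: an image is normalized and reduced to a short fingerprint that changes little when the picture is lightly edited. This is a simple average hash written with Pillow, not Apple's actual NeuralHash network; the function name, hash size, and example file name are assumptions made for this example.

```python
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Toy perceptual hash: shrink the image to hash_size x hash_size,
    then record which pixels are brighter than the mean. Apple's real
    NeuralHash uses a neural network, but the idea of a compact,
    similarity-preserving fingerprint is the same."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


# Hypothetical usage (file name is made up for the example):
# print(hex(average_hash("wallpaper.jpg")))
```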

The developer claims that the algorithm is not fooled by image compression or resizing, but it cannot handle cropping or rotation. Using the Python reconstruction, users on GitHub began examining how the algorithm works and whether it can be misused in a way that threatens iPhone owners. For example, it may be possible to craft a completely innocent-looking photo that nonetheless deceives Apple's hashes and produces a match with CSAM hashes. In theory, an attacker could send such natural-looking images, say a device wallpaper or an ordinary landscape, to Apple users to trigger the algorithm and cause them problems, as the sketch below illustrates.
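To see why a crafted but innocent-looking photo is a concern, consider how a match is usually decided: two fingerprints are compared bit by bit, and a distance at or below some threshold counts as a hit. The sketch below uses made-up 64-bit fingerprint values and an assumed threshold purely for illustration; neither the values nor the threshold are Apple's.

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits where two fingerprints differ."""
    return bin(h1 ^ h2).count("1")


# Hypothetical 64-bit fingerprints, invented for this example.
known_csam_hash   = 0xA5A5_F0F0_3C3C_0FF0  # entry from a blocklist
resized_copy_hash = 0xA5A5_F0F0_3C3C_0FF1  # resizing barely changes the hash
cropped_copy_hash = 0x1234_9ABC_DEF0_5566  # cropping changes it a lot

THRESHOLD = 4  # assumed matching threshold, not Apple's real value

for name, h in [("resized copy", resized_copy_hash),
                ("cropped copy", cropped_copy_hash)]:
    d = hamming_distance(known_csam_hash, h)
    print(f"{name}: distance={d}, match={d <= THRESHOLD}")
```

An attacker's goal in the scenario described above would be to generate a harmless-looking image whose fingerprint lands within that threshold of a blocklisted hash.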


Not the final version

Despite these flaws, the discovered algorithm is likely not the final version. Apple has been building its CSAM image-detection system for years, so several test versions of the code are bound to exist.

As for the possibility of creating fake photos and sending them to iPhone users to trap them, it is not that simple: photos are only scanned when they are synced to the cloud, and a human review step checks that flagged photos actually contain child abuse material. All of this means the attack is possible in theory but difficult in practice.


Apple responded

Apple said in a statement that the NeuralHash code discovered in iOS 14.3 is not the final version. In addition, no one has yet found a CSAM database or matching algorithm in the iOS 15 beta, but the feature is expected to launch in its full form after the official release of iOS 15.

What do you think about Apple's new child safety feature? Is there a fear that it could be misused? Tell us in the comments.

Source:

vice
