MIT Researchers Develop Algorithms to Reduce Bias in Selective Regression

MIT researchers discovered that selective regression, a technique in which a model abstains from predicting on inputs where it is not confident, can worsen performance for underrepresented subgroups even as it lowers overall error, and they developed two algorithms to address the problem. Using real-world datasets, they show that the algorithms reduce the performance gaps affecting underrepresented minorities. The researchers' goal is to ensure that as the overall error rate of the model decreases with selective regression, the error rate for each subgroup decreases as well. They call this criterion monotonic selective risk.
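
As a rough illustration of what this criterion demands, the Python sketch below runs a toy selective-regression loop: only the most confident fraction of predictions is kept, and error is tracked per subgroup. Everything here, including the selective_errors helper, the synthetic data, and the confidence score, is an illustrative assumption rather than code from the MIT work; the criterion simply requires that every subgroup's error, not just the overall error, shrink as coverage shrinks.

```python
import numpy as np

def selective_errors(y_true, y_pred, confidence, group, coverage):
    """Keep the most confident `coverage` fraction of predictions and
    report mean absolute error overall and for each subgroup."""
    n_keep = int(np.ceil(coverage * len(y_true)))
    keep = np.argsort(-confidence)[:n_keep]          # most confident first
    errs = np.abs(y_true[keep] - y_pred[keep])
    overall = errs.mean()
    per_group = {g: errs[group[keep] == g].mean() for g in np.unique(group)}
    return overall, per_group

# Synthetic data: group "B" is a small minority, as in the settings studied.
rng = np.random.default_rng(0)
y_true = rng.normal(size=1000)
y_pred = y_true + rng.normal(scale=0.5, size=1000)
group = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])
# Toy stand-in for a model's confidence score (higher = more confident).
confidence = -np.abs(y_true - y_pred) + rng.normal(scale=0.1, size=1000)

# Monotonic selective risk: as coverage falls, the error of *every*
# subgroup should be non-increasing, not just the overall error.
for cov in (1.0, 0.8, 0.6, 0.4):
    overall, per_group = selective_errors(y_true, y_pred, confidence, group, cov)
    pg = {g: round(float(e), 3) for g, e in per_group.items()}
    print(f"coverage={cov:.1f}  overall={overall:.3f}  per-group={pg}")
```

A violation of the criterion would show up as a subgroup whose error rises in this table as coverage falls.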

“It is challenging to come up with the right notion of fairness for this particular problem. But by enforcing this criterion, monotonic selective risk, we can make sure the model’s performance is indeed better across all subgroups when you reduce the coverage,” said lead author Abhin Shah, an EECS graduate student.

To address this problem, the team developed two neural-network algorithms that enforce this fairness criterion. One approach guarantees that the features the model uses to make predictions contain all information about the sensitive attributes in the dataset, such as race and gender. Sensitive attributes are features that are often not used for decision-making because of laws or organizational policies. The second approach uses a calibration technique to ensure the model makes the same prediction for an input, regardless of whether sensitive attributes are added to that input. When they applied their algorithms to a common machine-learning method for selective regression, they were able to reduce bias, achieving lower error rates for the minority subgroups in each dataset. Moreover, this was accomplished without a significant impact on the overall error rate.
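
To illustrate the property that the second, calibration-based technique targets, here is a minimal sketch in Python. It uses a toy linear model rather than the authors' neural networks, and the invariance_gap helper and all data are illustrative assumptions: the point is only that a model satisfies the property when adding or withholding the sensitive attribute never changes its prediction.

```python
import numpy as np

# Toy linear "model": prediction = X @ w, with the sensitive attribute in
# the last column. A zero weight there makes predictions invariant to it.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))             # two ordinary features + one sensitive flag
w_invariant = np.array([1.0, -2.0, 0.0])  # ignores the sensitive column
w_sensitive = np.array([1.0, -2.0, 0.7])  # lets the sensitive column leak in

def invariance_gap(w, X, sensitive_col=-1):
    """Mean absolute change in prediction when the sensitive attribute is
    withheld (zeroed out). A gap of 0 means adding the attribute to the
    input never changes the prediction."""
    X_masked = X.copy()
    X_masked[:, sensitive_col] = 0.0
    return float(np.abs(X @ w - X_masked @ w).mean())

print(invariance_gap(w_invariant, X))  # 0.0: invariant to the attribute
print(invariance_gap(w_sensitive, X))  # > 0: prediction depends on the attribute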