Microsoft has released Counterfit, an open source platform for measuring the security of artificial intelligence (AI) and machine learning (ML) systems. According to Microsoft, the tool helps determine whether AI and ML algorithms are “robust, accurate, and trustworthy.” The Redmond-based firm says it uses Counterfit internally to test its AI programmes for flaws before releasing them. On May 10, Microsoft will host a Counterfit walkthrough and live tutorial.
Counterfit, according to a Microsoft blog post, is a means of securing AI systems used across a variety of sectors, including healthcare, banking, and defence. Citing a survey of 28 businesses ranging from Fortune 500 firms to non-profits and small and medium-sized businesses (SMBs), Microsoft claims that 25 of the 28 lack the right tools to protect their AI applications.
The blog post states, “Consumers must have faith that the AI systems powering these essential domains are free from adversarial exploitation.”
Microsoft says it worked with a wide range of partners to validate Counterfit against their machine learning frameworks in their own environments, to ensure the tool meets the needs of a broader range of security practitioners. Counterfit is also described as a way to empower engineers to design and deploy AI systems securely. The tool is said to make published attack algorithms available to the security community, and to provide workflows and vocabulary similar to those of common offensive tools.
Microsoft hosts Counterfit in a GitHub repository and will run the walkthrough and live demo on May 10. Developers, and anyone at an organisation that needs to protect its AI systems, can register for the webinar.