Amid the debate now going on around artificial intelligence, attitudes appear to vary from ‘relaxed’, for whom most of what is currently called AI consists largely of rebranded data functions, to ‘concerned’, for whom AI is already among us, seeping into everyday activities and on the point of creating hard-to-reverse, if not irreversible, socio-economic change. Among the latter group the feeling is that regulation is needed to ensure that AI is developed in ways that provide the widest possible benefits while reducing the risks. For the ‘relaxed’, the solution is the opposite: effective AI depends on huge data flows, and most current models of regulation militate against this. Though GDPR is often quoted in this context, the latest IIC report on AI in the Asia-Pacific region* reveals that many Asian countries, including Indonesia and South Korea, employ data regulations that require either personal consent or localisation. In either case the burden on companies is time-consuming and expensive, and likely to impede AI innovation.
But the report points out that governments do not have to choose between enabling the flow of data across borders and upholding privacy and security principles. ‘Several of the highest-ranking economies have implemented – or are looking to implement – regulations that structure cross-border data flows in a more balanced, nuanced, and targeted manner’. The report looks particularly at Australia, which takes a ‘risk-based’ approach to data classification. The risk is assessed principally on the impact to the national interest that would arise if information were compromised. To ensure clarity, it uses as few tiers as possible (currently three). Risk profiles and security controls are then applied appropriately to manage the data, but the effect is to make much more data available than would otherwise be the case.
However relaxed or concerned one might be, the evidence suggests that a risk-based approach to data could be something on which everyone can agree.
*If you would like to receive the full report, please contact email@example.com