Maintaining privacy with AAC
Updated: Mar 29
The ability to analyse a user's language is valuable: it can improve language prediction and understanding for both the user and others. But it also carries the risk of reducing the user's privacy. Privacy must be the utmost priority; an AAC user should not have fewer privacy rights than a non-user simply because of the technology that aids them.
The complications here are intricate and important. One layer is informed consent: to truly consent to data collection, a user needs to understand exactly what is being collected, how, and what the implications are. One solution would be to collect no data at all, but the advantages of collecting language samples are considerable. The aim of language collection is a benevolent one, optimising AAC use, yet it could lead down a slippery slope of ever-expanding data collection, ending in a complete lack of privacy for these users. There must be constant awareness of this risk, and constant consideration of what AAC users actually need.
With this in mind, there is a tug of war between giving AAC users the highest quality of communication possible and maintaining their privacy. Even if data is anonymised, records of conversations may still exist in some capacity, which can be a concern for users, especially where sensitive information is involved.
If you've used Predictable, you'll know that when a phrase is spoken, it is saved under the History section so it can easily be repeated. However, this data is not used in any other capacity, and can be deleted by the user at any time. It is not associated with any conversation or point in time; it is simply a list of phrases sorted by recency. This is an important distinction: no AAC user should have to worry about their words being recorded or analysed any more than any other speaking person.
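To make the distinction concrete, a privacy-minded history like the one described above can be sketched as a plain recency-ordered list. This is a hypothetical illustration, not Predictable's actual code: the point is that nothing beyond the phrase text itself is stored, and the user can delete entries or clear the list entirely.

```python
class PhraseHistory:
    """Hypothetical sketch of a recency-only phrase history.

    No timestamps, no conversation context, no analytics: just the
    phrases themselves, most recent first, fully user-deletable.
    """

    def __init__(self, max_size=50):
        self.max_size = max_size
        self._phrases = []  # most recent first

    def record(self, phrase):
        # Re-speaking a phrase simply moves it to the top;
        # no metadata about when or where it was said is kept.
        if phrase in self._phrases:
            self._phrases.remove(phrase)
        self._phrases.insert(0, phrase)
        del self._phrases[self.max_size:]  # drop the oldest entries

    def recent(self):
        return list(self._phrases)

    def delete(self, phrase):
        # The user can remove any single phrase...
        if phrase in self._phrases:
            self._phrases.remove(phrase)

    def clear(self):
        # ...or wipe the history completely.
        self._phrases.clear()
```

Because the list holds only phrase text, deleting it removes every trace: there is no separate log or timeline to scrub.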
But what about other data? User data, including data unrelated to language, is essential for us. Whether we're analysing user reviews, collecting feedback, or answering support queries, we collect data to keep improving our apps and to facilitate and promote ease of AAC use. By monitoring this generalised information, we can learn which areas need improvement, which features work well, and what isn't really needed. This approach lets us use data to improve without risking anyone's privacy, doing our best for our users while maintaining integrity and anonymity: a valuable commitment in our ever more digitalised world.