Hi, I’m Allison. I work on strategy and research at Canopy. I was drawn to this company because of its mission to make the internet better. We often say “we’re doing it right this time,” and our motto is “find the good” — but knowing what is right and what is good requires a deep awareness of our societal landscape.
Silicon Valley has long operated on the mentality that it creates (and owns) the tools, that those tools are value neutral, and that progress is inevitable. But as we’ve seen over the past couple of years, the future is unwritten. I’m excited to dive deep into these ideas here at Canopy.
Following the Snowden leaks, we saw renewed interest in resisting corporate and governmental surveillance. I became interested in the political ramifications of technological decisions during my graduate work at NYU. It’s why I worked in Speculative Hardware at UNICEF, created a critical theory of technology program at the School for Poetic Computation, built anti-surveillance hardware at Eyebeam, taught many encryption workshops, and theorized on the impossible horizon.
I am fascinated by impossibility: who defines it, and why. Generally, something is only impossible until it isn’t, and the internet’s current inescapable data exchange process is ripe for, as they say, disruption.
Startup products are made within the framework of someone’s hypothesis for understanding the world. The current model of the internet relies on data accumulation, inexplicable data exchanges, and obfuscated privacy practices. So many of the problems we see in the data economy today are the result of a failure of imagination: creators did not ask big enough questions.
With this post, I want to highlight some of the questions we’re grappling with and the ways we’re going about addressing them. Some general markers for how I approach these problems:

- Why is the world the way it is?
- Why is it not a little (or a lot) different?
- Who made the decisions years or decades ago that affect us today?
- How are we currently shaping the future?
One thing we at Canopy have been tackling is the explainability of our model. We talk a lot about recourse: people should be able to know what we do (and don’t) know about them, and then make decisions based on that knowledge. In our beta testing, we’re exploring how to communicate what the model (privately) knows about people so that they feel seen, get the recommendations they’re looking for, and complete a feedback loop that is healthy rather than exploitative.
We’re hard at work on some of the biggest questions in the industry: the data exchange model, imagining a better internet, the explainability of AI, and maintaining the privacy of our users. We’re really excited to explore these concepts with you in the coming weeks.
Interested in seeing some of this work in practice? Then sign up to join our iOS beta program and help us build a better internet. You’ll help us build out new features, catch bugs, and shape the future of private discovery.