
18 August 2023

The risks and rewards of AI in the housing system

From Google Street View mortgage assessors to VR planning applications and open source renters’ rights tools, AI is changing the way we manage homes and buildings.
By Martha Dillon


From Google Street View mortgage assessors to VR planning applications and open source renters’ rights tools, AI is changing the way we manage homes and buildings. There’s a very real opportunity for some of these technologies to improve our lives – but it could also accelerate the financialisation of our homes. The government urgently needs to take control of this situation to avoid AI worsening the UK’s housing crisis. 

The architecture world is ablaze with AI. Architects have leaped on the sensational launch of text-to-image software like Dall-E, Midjourney and Stable Diffusion, dreaming up buildings made from rubbish, inflatable cathedrals and structures inspired by crumpled paper. But, as a recent Guardian article argues, beyond the fanciful renderings AI is already relatively well established behind the scenes. Architecture firms are working with AI tools to generate ideas, improve their design tools and develop more detailed plans. It is, some say, an opportunity to speed up and streamline design processes.

As critics have made clear, the jury is out as to whether the designs that AI speeds up are any good. But the opportunities for time saving are significant, as it can take as long to plan, design and get approval for a construction project as to build it. This could, theoretically, make it quicker to bring home building projects to fruition, a key frustration of many housing campaigners. But, just as building more homes isn’t the sole solution to the housing crisis, AI is disrupting housing in more ways than one. 

One of the emerging areas of housing AI is technology to speed up the purchase and sale of homes. Entera uses AI-led analytics to help investors find, buy and operate homes. Real estate firm Zillow launched a machine learning algorithm to estimate the value of homes as early as 2006, and in 2018 launched an ‘iBuyer’ platform called ‘Offers’ (now collapsed), where people could sell their home to Zillow directly at an algorithmically derived price. The MIT Senseable City Lab has designed a predictive model that values homes solely using images, including Google Street View photos. This ostensibly removes both bias and time from the normal valuation process: reducing the bureaucracy associated with real estate would be welcome news for many (and could reduce brokerage fees). But none of this software solves the deep-rooted problems of our homes being too expensive, in poor condition and hungry for fossil fuels.

In fact, they will probably make them worse. Our research indicates that house prices have been inflated in recent decades in large part because homes have been turned into assets to be bought and traded. Since the 1950s, the UK government has encouraged people to buy their own homes as a financial investment and a store of wealth. This was enabled by relaxed regulations that increased the availability of cheap mortgage credit, new policies encouraging investors to put money into homes, and a lack of genuinely affordable alternatives (like social homes) forcing people to buy into the new regime – house prices inevitably surged. AI platforms that speed up housing transactions leverage the mentality that homes are assets to be bought and sold. They aim to make selling a home as quick as a ‘click’, like a stock or share, and to automate valuation processes to maximise profits. This vision reduces the act of buying a home to ruthlessly efficient data analytics, primed to profit from locations where house prices are rising, even though the AI and its owner have made no productive contribution to the area. “Predictions could turn into prophecies,” warns one of the creators of the Senseable City Lab model. “Imagine a dystopian future where everyone repaints their house a specific colour to game the system and impress the Google Street View Car.”

Another risk is the basis on which AI iBuyers make decisions. AI ‘learns’ by analysing existing data, websites and documents – and all the prejudices they contain – so it is plagued by in-built bias, racism and discrimination (not to mention the poor treatment of workers involved in training some AI models in the first place). This threatens to exacerbate pre-existing inequity and oppression in the housing system. In the US, this has already proven true for mortgages. Recent research has found that machine learning models tasked with recommending mortgage approval decisions not only amplified historic patterns of Black applicants being less likely to be approved, but left all applicants worse off when ‘fairness’ corrections were applied. While UK mortgage approvals are not known to be as actively discriminatory as those in the US, Black and Brown people have long reported struggling to access loans and mortgage credit. Our research finds that households recorded in the 2021 Census as mixed race, Pakistani, Bangladeshi, Chinese, Black, Roma and Arab were all significantly less likely to own a home than the national average, with this pattern worsening, not improving, for many of the younger generation. These patterns won’t change if AI continues to make decisions based on current criteria, expectations and trends. And sometimes the prejudice is seemingly random – a recent FT column reports that “when Sarah Bell, an AI specialist… asked ChatGPT to evaluate the desirability of different tenants based on nationality, she discovered that the python computing code inside the bot was profoundly prejudiced against Australians… it was impossible to determine exactly why it disliked Australians, because the system was a black box.”

The risk of discrimination through AI is even higher, and more insidious, for renters. In the UK, renters face discrimination from the point of requesting viewings; Right to Rent requires landlords to deny lodgings to people without legal status to live in the UK, with 51% of landlords consequently less likely to even consider renting to foreign nationals from outside of the EU. In the US, as with mortgages, AI used by landlords to screen tenants is already discriminatory, routinely running background checks that rule out people with minor criminal convictions, disabilities and eviction records. But the additional risk to renters is of heightened surveillance once within a home. AI advocates argue that a crucial use for AI is to manage networks of sensors – to optimise energy use, monitor patterns of activity, and detect the need for repairs. ‘AI will also resolve access issues through the ability to monitor homes inside and out’ enthuses the CEO of UK housing tech company Fuzzlab, ‘[it will] recognise and allow access to certain individuals and even create zones within a property that individuals have access to while restricting other zones.’ This surveillance tech is, quite evidently, ripe for abuse.

One area where AI is emerging with more promise is planning. There are many core problems in planning that AI cannot solve – councils selling off social homes, building regulations that accept socially and ecologically damaging new projects, and developer speculation and land banking, to name a few. But there are some that it could.

The Turing Institute and RIBA have both suggested that AI can help simplify the technical aspects of planning approvals processes, enabling councils to regain control of a vital but complex, underfunded and overloaded system, and applicants to more easily submit requests. The current planning system is deeply bureaucratic, requiring officers to process lengthy documents and detailed plans in a matter of minutes or hours. According to research by the Turing Institute, around 80% of rejected planning applications in the UK are related to 12 common technical mistakes, rather than any deep issues with the projects. Permitted Development applications are technical applications only, requiring no political decision. Automating some of these processes would provide greater scrutiny of the technical components of applications, enforcing stricter compliance with building regulations and local policies. Officers would then have more time to delve into political questions and check whether the project aligns with wider local priorities. AI tools could also give councils greater oversight of early stage developer applications. Blocktype is an AI-powered tool that ‘give[s] you a ballpark sense of what’s possible on a site, providing sketch layouts and viability appraisals’. This could reduce developer land speculation by helping councils understand the limits and possibilities of a given site, pushing back on overpromising and requiring developers to be more ambitious in terms of delivering community benefits.

AI also has immense possibilities in coordinating and improving public access to housing data across cities and regions. Industry group Future Cities Catapult has been working with architecture firm Hawkins\Brown on an AI tool that will allow passers-by to use their phones to see virtual design proposals for vacant sites, increasing participation in local planning decisions. Housing justice organisations like JustFix have been building accessible digital tools to help renters assert their rights, make real estate data easy to understand and analyse urban trends to campaign for better housing policies. Sustainability engineers advocate for digitised building passports and logbooks that track environmental data to support the reuse and retrofit of buildings. Again, none of these are a silver bullet to the problems in the housing system. But they could go some way to restoring democratic control of the housing system and accelerating ecologically safe practices (although it should also be recognised that AI’s own cataclysmic environmental footprint urgently needs tackling).

The success of even the more promising applications of AI is, of course, contingent on who owns the algorithm and how it is designed and managed. ‘Given their collaborative and networked nature’ argues think tank Common Wealth, ‘[digital platforms] have great potential to be organised through multi-stakeholder models of governance and ownership, giving suppliers and users of the platform genuine voice and control.’ Digital platforms like JustFix, and the people-oriented initiatives to simplify and democratise planning described above, could genuinely help to manage our homes more effectively, fairly and transparently. They could be part of what writer Ben Tarnoff calls the ‘third wave of algorithmic accountability’: a flourishing world of collectively governed social media sites, worker-owned app-based services, and publicly and cooperatively owned broadband networks.

But, as Common Wealth notes, major digital platform companies ‘have become the robber barons and rentier giants of our age. Their main focus has become the collection of [economic] rent while fending off potential competitors and swatting away regulations and public policies aimed at curtailing their power.’ The iBuyers and AI real estate data monitoring systems signal an acceleration of these big, data-rich tech platforms into the housing space. In a sector characterised by a quite literal battle for ownership, controlling social homes appears to be the next step in their expansion.

Private interest in social housing is rising rapidly. According to Savills, for-profit providers now own more than 28,150 ‘affordable’ homes – a growth of 35% since March 2022. At the same time, pundits are arguing that AI should provide customer service bots for council housing residents, monitor internal sensors, and control maintenance cycles, applications all already used by Bradford-based Manningham Housing Association. AI company Pivigo – working with social housing providers Peabody, London Borough of Camden, Community Housing and Cobalt – uses machine learning to help ‘maximise rent collection, identify tenants most in need of support and transform productivity.’ 

With appropriate privacy controls and non-digital alternatives for residents who can’t get online, most of these applications may seem relatively innocuous. But there is a real risk that social housing residents will increasingly be confronted by unregulated technologies which we know are often discriminatory and could be capable of running invasive background checks. Sensor and maintenance monitoring companies are laying the ground for more detailed surveillance of people’s activities. This may also have implications for the long-term management of social homes. Many councils justify decisions to demolish council estates and sell off the land to private developers using dubious arguments that buildings have fallen into disrepair. Handing detailed condition data to real estate companies primes them to play an outsized role in contentious, profitable battles over social housing demolition and regeneration plans. With the proportion of social homes in the UK at record lows, the growing power of data-hungry, poorly regulated and profit-seeking technology companies sets alarm bells ringing.

The risk of AI being used to increase profit-seeking in the housing system is a strong reminder that the real solutions to the housing crisis lie in de-financialising our homes, deep investment in social and community-owned homes, and better rights for renters. But it also flags that unlocking the useful applications of AI requires more than a weak set of regulations designed by private companies with vested interests in these technologies.

As groups like Common Wealth have described, we need new, democratic and multi-stakeholder organisations and approaches to AI models and software, with strong antitrust protections in place to control Big Tech platforms. We need in-person alternatives to all AI-run systems that provide essential services, for those who cannot access digital technologies. All user data running through AI models must be managed with strong privacy protections, including anonymisation of all data – except in cases where there is a clear benefit to the individual, in which case sharing should be opt-in. AI used for socially central functions such as house price tracking, planning and social housing management should be owned and managed entirely by public or community-led bodies, or required to report into a central system so that the data remains in the public domain. Labour laws must ensure that work organised or affected by AI intermediaries remains secure and decent; AI in planning should help council officers scrutinise applications in more detail, not replace them.

AI is changing our housing system, but making sure that it is useful for our homes, rather than accelerating our current problems, will require concerted effort, both by the public bodies that create new housing policies and the private companies designing AI linked to our homes.

If you haven’t yet, sign up to our mailing list for more regular updates, or donate to support our work fighting for a money and banking system that enables a fair, sustainable and democratic economy.
