AI ethics and transparency are vital for trust, accuracy and privacy, ensuring responsible innovation while mitigating bias and data risks.
Ethics and trust are crucial when applying AI to ensure that its development and deployment align with societal values, protect individual rights and foster responsible innovation. Large Language Models (LLMs) should be accountable for their outputs and avoid creating or reinforcing inequalities. Compared to other industries, the impact of AI inaccuracies in tourism may not appear to have significant ramifications. However, misleading information detracts from the visitor experience. The risk of algorithms perpetuating stereotypes or misrepresenting cultural customs, and Microsoft Start's gaffe of publishing an article promoting the Ottawa Food Bank as a top tourist attraction, are clear examples of how conflicts between locals and tourists can escalate when AI goes wrong. Errors like these show how AI systems that disregard cultural sensitivities can damage brand reputation.
As everyday life becomes increasingly reliant on AI, we must become accustomed to identifying risks and fact-checking AI-generated outputs. Transparency is a key component in building trust and helping users understand how AI models work. This includes explainability, which is especially important in generative AI for addressing bias, accuracy, fairness and trust. At the same time, users should maintain control over their personal data so that AI does not infringe on their privacy, and AI models should be built upon trusted standards. The risk is that, with society becoming ever more divided on a range of issues, generative AI may spread unfounded claims that do not reflect a more nuanced reality. These might include malicious reviews of travel experiences or false claims about safety in a destination or the friendliness of locals. Recent research suggests that almost 11% of reviews on Tripadvisor are likely AI-generated. Biased algorithms built upon inaccurate data can entrench discriminatory practices, skew recommendations and lead to price and product discrimination. Brands must therefore use AI responsibly, assuring information accuracy and avoiding commercial bias that prioritises profit over authenticity.
The temporary ban of ChatGPT in Italy in April 2023 highlighted the importance of addressing privacy concerns and ensuring compliance with data protection regulations. It underscores the legal and reputational risks associated with AI when ethical principles and data accountability are not prioritised. Left unaddressed, these concerns will only undermine confidence. As the number of AI models grows rapidly, we believe transparency will join price and accuracy as a critical factor in decision-making, helping organisations achieve long-term success by choosing the right AI tools.
The emergence of DeepSeek, a Chinese AI company, sent global shockwaves, demonstrating that this technology is no longer solely the domain of established tech giants. DeepSeek's development of advanced language models with comparatively few resources underscores how emerging markets are rapidly catching up with the pioneers of this capability. DeepSeek's R1, a "reasoning" model, mimics the human thought process of working through challenges and new ideas step by step, making its outputs easier for users to follow. By releasing distilled versions of its models, DeepSeek enables users to experiment across a wider range of devices and even create their own specialised "student" models derived from the original "teacher" model. Knowledge distillation, a form of model compression and knowledge transfer for large deep neural networks, produces a more compact and efficient model that retains much of the capability of its larger counterpart. This approach, combined with DeepSeek's significantly lower memory usage, translates into reduced operational costs for users. The potent combination of high performance and affordability propelled DeepSeek to the top spot as the most-downloaded free app on Apple's App Store upon its US release. Receiving far less attention, Krutrim is positioning India at the forefront of AI research and the development of multimodal systems. With the emergence of these and other new challengers, the tourism industry must get to grips with the inner workings of different models.
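To make the principle concrete, here is a minimal distillation loss in PyTorch. This is an illustrative sketch of the general technique, not DeepSeek's actual training recipe; the teacher and student logits are assumed to come from placeholder models that both map inputs to the same vocabulary.

```python
# Minimal knowledge-distillation sketch in PyTorch (illustrative only;
# not DeepSeek's recipe). The student learns from the teacher's full
# output distribution as well as from the ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft loss (match the teacher's softened distribution)
    with a hard loss (match the ground-truth labels)."""
    # Soften both distributions so the student also learns the
    # teacher's relative confidence across all output tokens.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * temperature ** 2  # rescale gradients to the original magnitude
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

Training the compact student against this blended objective is what lets it retain much of the larger model's behaviour while running on far more modest hardware.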
However, these open-source approaches, while fostering collaborative innovation and democratising access to cutting-edge AI platforms, raise critical security considerations. The same accessibility that makes the technology so widely useful also allows it to be used and modified by actors with malicious intent, and the possibility of exploiting vulnerabilities within open-source code for harmful purposes becomes a tangible concern. DeepSeek's rapid rise has sparked international scrutiny due to unease about Chinese government censorship policies and data collection practices, leading to regulatory actions in countries from Italy to Australia and highlighting the geopolitical dimensions of AI development. The company's research-focused approach, without an immediately clear commercialisation strategy, creates additional uncertainty around long-term security protocols and the potential for unforeseen consequences as these powerful models evolve. This underscores a broader challenge: as generative AI becomes more accessible and its development diversifies beyond large corporations, ensuring responsible development, ethical deployment and robust security measures becomes increasingly complex and requires a global, multi-faceted approach.
With DeepSeek among the best-performing AI models according to both traditional Massive Multitask Language Understanding (MMLU) benchmarks and assessments conducted by researchers from Cardiff Metropolitan, Bristol and Cardiff universities, it should be considered a market leader. Outperforming many of OpenAI's models, DeepSeek has clear potential to provide effective support for businesses. However, while it has dominated media attention due to geopolitical tension and the potential for the Chinese government to use it as a data collection tool, the issue of data privacy is much broader. OpenAI, for example, has faced numerous lawsuits over the potentially illegal scraping of articles and books to train its models.
With algorithmic bias and data privacy as core concerns, DeepSeek's widespread adoption appears uncertain, especially given the wider mistrust of Chinese-owned companies such as TikTok and Huawei. Nevertheless, since the major AI developers disclose that conversations may be processed to improve the training of their models, generative AI poses new challenges for organisations seeking efficiency gains while also managing confidential information. With only partial support for an international agreement on AI development at the recent AI Action Summit in Paris, where the USA resisted increased regulation and the UK cited national security concerns, this issue is unlikely to be resolved anytime soon. It will probably only become a more substantial challenge as AI is embedded into all operational workflows.
Rather than despairing at this sombre reality, there are many opportunities to be innovative in the practical application of AI. Solutions such as LM Studio and Ollama may provide an answer. Opening up choice, these apps enable users to run LLMs offline: all activity is stored locally, with no data processed through external servers. With open-source AI models becoming increasingly common, such approaches can raise the sophistication of AI usage by protecting privacy and sharply reducing the risk of accidental or malicious data breaches. Beyond privacy, such applications also often support Retrieval-Augmented Generation (RAG), which links local documents to AI queries. Instead of relying on static, pre-trained LLM data, RAG systems provide access to up-to-date information, significantly expanding the relevance of LLM outputs. By injecting retrieved context into the prompt, RAG helps to minimise AI hallucinations and potential contradictions, resulting in a more reliable and satisfying user experience.
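As a minimal sketch of the pattern, assuming a local Ollama server is running with the llama3 model pulled, and using naive keyword overlap in place of the vector-embedding retrieval a production RAG system would use:

```python
# Minimal local RAG sketch: retrieve the most relevant snippet with
# naive keyword overlap, then ground the prompt with it. Assumes an
# Ollama server on its default port with the "llama3" model pulled.
import requests

documents = [
    "The museum is open daily from 09:00 to 17:00, except Mondays.",
    "City bikes can be rented at the harbour for 12 euros per day.",
    "The castle tour runs hourly and lasts about 45 minutes.",
]

def retrieve(question, docs):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def ask(question):
    context = retrieve(question, documents)
    prompt = (f"Answer using only this context:\n{context}\n\n"
              f"Question: {question}")
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
    )
    return resp.json()["response"]

print(ask("When is the museum open?"))
```

Because both retrieval and generation happen on the local machine, the visitor-facing documents and the question itself never leave the organisation's infrastructure.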
While defining clear strategic approaches to AI implementation is not easy, with pragmatism and determination, effective solutions can be found. The ability to choose between pre-trained models and interact with them privately will allow tourism businesses to draw on model catalogues and use multiple AI assistants for specialised tasks without paying for numerous subscriptions, as sketched below. This enhanced transparency, and the cost efficiencies it brings, will be key to encouraging experimentation and driving future success through open-source AI models.
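A hypothetical sketch of what this could look like in practice: routing different specialised tasks to different locally hosted models through a single Ollama endpoint. The task-to-model mapping is purely illustrative and assumes the named models have been pulled.

```python
# Hypothetical routing of specialised tasks to different local models
# through one Ollama endpoint, avoiding per-tool subscriptions.
import requests

TASK_MODELS = {
    "translation": "llama3",   # general multilingual assistant
    "itinerary": "mistral",    # drafting and summarisation
    "code": "codellama",       # internal tooling support
}

def run_task(task, prompt):
    """Send the prompt to whichever local model handles this task."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": TASK_MODELS[task], "prompt": prompt,
              "stream": False},
    )
    return resp.json()["response"]

print(run_task("itinerary", "Draft a one-day walking tour of Bruges."))
```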
While we believe that open-source AI applications will be an enabler and drive progress, some challenges remain. As generative AI tools are embedded in almost every software product and digital platform, these very integrations could pose risks. While they are designed to provide an extra layer of support, businesses may unwittingly share private information through them. Because these embedded tools cannot be taken offline, employees may incorrectly assume that data will not be stored beyond the organisation's account or ingested by the AI model to inform future training. Without clear policies and staff training on generative AI usage, data protection rules may inadvertently be contravened. The rapid development of AI calls for more agile approaches to training, making learning and development a continuous, iterative process that adapts alongside technological change.
Despite the remaining challenges, the future is promising. At the Digital Tourism Think Tank, we expect that the open-source ethos of LLMs will eventually extend to AI integrations themselves, providing choice and enhanced transparency across all digital touchpoints. With such functionality already available for online searches through Perplexity, which lets users switch between OpenAI's GPT-4 Omni, Anthropic's Claude 3 Sonnet and Haiku, and Perplexity's own Sonar Large 32k model based on Meta's Llama 3, this future may not be far away.