Hill Management Group (WeRX Brands) Policy on the Use of Artificial Intelligence

Hill Management Group has endorsed the following Policy on the Use of AI in our company to ensure consistency, transparency, and trust within our business community.

Purpose

This policy aims to guide the responsible and ethical use of generative AI tools within Hill Management Group | The WeRX Brands (HMG) to enhance productivity, creativity, and innovation while maintaining the highest standards of quality, accuracy, and brand integrity.

Scope

This policy applies to all employees, contractors, and any other individuals using generative AI tools on behalf of HMG. It covers all forms of generative AI, including text generation, image generation, code generation, and any other AI-powered content creation tools.

Key Terms and Definitions

  1. AI-Generated Content: Any output produced by a generative AI tool, whether a draft, outline, finished product, or visual element.
  2. Brand Voice: The distinct personality, tone, and style of communication that represents a company or brand.
  3. Copyright Infringement: The unauthorized use of copyrighted material in a way that violates the copyright owner's exclusive rights, such as reproducing, distributing, or publicly displaying the work.
  4. Data Privacy Regulations (e.g., GDPR, CCPA): Laws that govern the collection, use, and disclosure of personal information.
  5. Draft: A preliminary version of a piece of content, typically generated by a human or AI, that requires further review, editing, and refinement before it is considered final.
  6. Final Output: Content that is considered complete and ready for its intended use, whether internal or external.
  7. Generative AI: Artificial intelligence systems capable of creating new content, including text, images, music, code, or other forms of media, often in response to prompts or instructions.
  8. Hallucination (in AI): The tendency of AI models to generate outputs that are factually incorrect, nonsensical, or unrelated to the given input.
  9. Intellectual Property (IP): Creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce.
  10. LLM: Large Language Model. A type of artificial intelligence model trained on massive datasets of text and code that can generate text, translate languages, write different kinds of creative content, and answer questions in an informative way.
  11. Outline: A structured plan or framework for a piece of content, often including main points, subtopics, and a general flow of ideas. Outlines are typically used to guide the creation of drafts.
  12. Plagiarism: The act of using someone else's work, ideas, or words without giving them proper credit, whether intentional or unintentional.
  13. Sensitive Content: Any information that is confidential, proprietary, regulated, or that could be harmful or offensive if misused or disclosed. This could include personal data, financial information, trade secrets, or content related to controversial topics.

Principles

  • Human-Centric Approach: We believe the best use of Generative AI is to augment human capabilities, not replace them. Human judgment, creativity, and expertise remain essential in all aspects of our work.
  • Quality and Accuracy: All content, whether AI-generated or not, must meet our company's high standards for quality, accuracy, and relevance. Fact-checking and verification are mandatory.
  • Ethical Use: Generative AI must be used responsibly and ethically, respecting copyright laws and intellectual property rights and avoiding any form of plagiarism or misrepresentation.
  • Transparency: When AI is used to generate content, it should be clearly disclosed to relevant stakeholders, including clients, colleagues, and the public, as appropriate.
  • Brand Voice and Consistency: AI-generated content must align with our brand voice, style guidelines, and messaging principles.

General Guidelines for Use

  • Approved Tools: Employees of HMG may use only generative AI tools that have been approved by the HMG management team.
  • Outlines and Ideation: AI can be a valuable tool for generating initial checklists, brainstorming ideas, summarizing information, condensing copy into meta content for SEO purposes, or outlining content structures.
  • Editing and Refinement: All AI-generated content must be thoroughly reviewed, edited, and refined by a human expert to ensure accuracy, clarity, originality, and alignment with our brand voice.
  • Sensitive Content: All employees of HMG and contractors doing work on behalf of HMG must exercise caution when using AI for sensitive topics or content that requires nuanced understanding or cultural sensitivity.
  • Data Privacy: Be mindful of data privacy regulations and company policies when using AI tools that may process or store sensitive information.
  • Copyright and Plagiarism: Never use AI-generated content without proper attribution or permission, and always verify that it does not infringe on any copyright or intellectual property rights. If you cannot verify that a piece of AI-generated content is free from copyright infringement, it cannot be used.

AI-Generated Content for Publication

HMG does not allow AI-generated content to be used as final output for publication. There are four major reasons for this policy:

  • Plagiarism Risk: Because AI models are trained on vast, often opaque, datasets, they can generate content that directly copies or closely resembles existing work, leading to plagiarism.
  • Accuracy Concerns: AI models can produce inaccurate information, especially on complex or rapidly evolving topics. Human fact-checking is mandatory.
  • Legal and Ethical Considerations: The legal landscape surrounding AI-generated content is still evolving. Using AI output as final could violate IP rights belonging to someone else and/or expose the company to legal risks.
  • Brand Voice: AI struggles to fully capture the unique nuances, tone, and style that define our brand voice. Human refinement is necessary to ensure consistency and authenticity.

AI-Generated Content Not for Publication

In specific scenarios, HMG permits the use of AI-generated content as final output for internal purposes or in cases where the content will not be published. These scenarios may include:

  • Internal Training Materials: AI can efficiently generate training documents, presentations, or quizzes for employees, reducing the workload on human resources.
  • Process Documentation: AI can assist in creating standard operating procedures, technical guides, or workflow diagrams, streamlining internal knowledge sharing.
  • Strategic, Operational, or Marketing Planning Documents: AI may be used to generate initial drafts of internal strategy documents, reports, profiles, or presentations.

Why the Standard Risks Do Not Apply (As Strongly) to Internal Content

  • Reduced Plagiarism Risk: While AI-generated content still needs review, the risk of unintentional plagiarism is lower when the content is not intended for public consumption.
  • Lower Accuracy Stakes: Errors or inaccuracies in internal documents are less likely to cause significant harm compared to published materials. However, accuracy remains important for effective decision-making.
  • Limited Legal Exposure: The legal implications of AI-generated content are less concerning for internal documents, which are less likely to be subject to copyright or intellectual property disputes. Nevertheless, proper attribution of content quoted from other sources is an expected behavior at HMG, even for internal documents.
  • Brand Voice Flexibility: While maintaining brand voice is always valuable, there is more leeway in internal materials, which are typically not written or vetted by the marketing department.

Important Note: Even when AI-generated content is used for internal purposes, it is important to adhere to all other aspects of this policy, including:

  • Human Review and Refinement: All AI-generated content, regardless of its purpose, must be thoroughly reviewed and refined by a human expert to ensure accuracy, clarity, proper attribution, and appropriateness.
  • Ethical Considerations: Ensure that AI-generated content does not perpetuate bias, discrimination, or misinformation.
  • Data Privacy: Protect sensitive company information and adhere to all data privacy regulations when using AI tools.

Approval Process: The use of AI-generated content as final output for internal purposes or non-publishable purposes should be approved by the relevant department head or supervisor.

Functional Area-Specific Guidelines for AI Use

Text Content Creation

  • Brainstorming and Ideation: Use AI to generate creative ideas for campaigns, slogans, taglines, social media posts, blog topics, or email subject lines.
  • Outlining: Leverage AI to create outlines for marketing copy, press releases, website content, social media captions, or ad copy. Remember, these are just starting points; human refinement is mandatory.
  • Content Optimization: Use AI-powered tools to analyze content for SEO effectiveness, readability, and engagement potential. Incorporate suggestions, but maintain our brand voice. A simple illustration of the kinds of checks these tools automate appears after this list.
  • A/B Testing: Employ AI to analyze A/B test results and identify patterns or trends to optimize marketing campaigns and strategies.
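
The sketch below illustrates, for the Content Optimization point above, the kind of mechanical checks that AI-powered optimization tools automate: sentence length and keyword frequency. It is illustrative only; the thresholds, sample text, and keyword are hypothetical, and it is not a substitute for an approved tool or for human editorial judgment.

```python
# Illustrative content check (hypothetical thresholds and keyword).
# Flags overly long sentences and counts how often a target keyword appears.
import re


def content_report(text: str, keyword: str, max_words_per_sentence: int = 25) -> dict:
    """Return basic readability and keyword metrics for a draft."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    long_sentences = [s for s in sentences if len(s.split()) > max_words_per_sentence]
    return {
        "sentence_count": len(sentences),
        "avg_words_per_sentence": round(len(words) / max(len(sentences), 1), 1),
        "long_sentences": long_sentences,
        "keyword_mentions": text.lower().count(keyword.lower()),
    }


if __name__ == "__main__":
    draft = "Our new service launches in May. It helps teams plan campaigns faster."
    print(content_report(draft, keyword="campaigns"))
```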

AI-Generated Text Drafts are Not Allowed

The use of AI for creating drafts, even first drafts, of final content is not permitted at HMG. This is for the following reasons:

  • Over-Reliance on Training Data: LLMs are trained on massive datasets of existing text and code. Even with "tweaking," the risk remains that the generated draft could too closely resemble the original source material, leading to unintentional plagiarism or copyright issues.
  • Legal Gray Areas: The legal landscape around AI-generated content is still evolving. While minor modifications might seem like enough to avoid plagiarism, there is no guarantee that courts will agree.
  • Reputation Damage: Accusations of plagiarism, even if unintentional, can severely damage our reputation for originality and integrity.
  • Generic Output: AI models often produce generic or formulaic drafts that lack the unique voice, tone, and style that differentiate our brand (or our customer’s brands). Significant rewriting will likely be needed to align the content with brand guidelines.
  • Time Investment: AI-generated drafts are likely to require extensive editing to match the desired brand voice. It is more efficient to start from an AI-generated outline, after which a human writer who intrinsically understands the brand voice will create all drafts.
  • Hallucination: LLMs can "hallucinate," or generate information that is factually incorrect or misleading. Even with careful editing, it is easy to miss these inaccuracies, which could have serious consequences for our clients’ or our company's reputation.
  • Domain Expertise: While LLMs can access vast amounts of information, they lack the nuanced understanding and domain-specific expertise that our human experts possess. This can lead to drafts that are superficially correct but miss important details or insights.
  • Misrepresentation: Presenting AI-generated drafts as original human work, even if heavily edited, could be seen as deceptive or misleading to clients or other stakeholders.
  • Transparency: Maintaining transparency about our use of AI is important for building trust with clients and the public. If we are relying heavily on AI for drafts, it becomes more difficult to clearly delineate where the AI's contribution ends and our human expertise begins.

Personalization and Targeting

  • Email Marketing: It is acceptable to use AI to add personalization to email campaigns based on customer data, behavior, and preferences, using data that comes from our own CRM systems.
  • Advertising: It is acceptable to use AI to target specific audiences with relevant ads based on demographics, interests, and online behavior, as long as that advertising is done with approved advertising sources and partners.

Chatbots and Customer Service

  • Website Chatbots: It is acceptable to deploy AI-powered chatbots to answer customer questions, provide support, and gather leads, as long as those chatbots are using HMG-developed and approved knowledgebases (a minimal illustration follows this list).
  • Social Media Chatbots: It is acceptable to use AI-powered tools to manage social media inquiries and comments, as long as those chatbots are using HMG-developed and approved knowledgebases.
  • Prioritize Customer Experience in Chatbots: While AI can efficiently generate chatbot content, the primary focus should be on creating a positive and effective customer experience. Any AI-powered chatbot intended for customer interaction must undergo thorough internal testing to ensure that user satisfaction and quality interactions take precedence over efficiency goals.
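
As an illustration of what "using HMG-developed and approved knowledgebases" can look like in practice, the sketch below shows a chatbot that answers only from a curated Q&A file and hands the conversation to a human when it finds no confident match. The file name, matching logic, and threshold are hypothetical placeholders, not a prescribed implementation; any production chatbot must be built with approved tools and tested as described above.

```python
# Minimal sketch of a knowledge-base-restricted chatbot (hypothetical file and threshold).
import json
from difflib import SequenceMatcher


def load_knowledgebase(path: str) -> list[dict]:
    """Load approved Q&A entries, e.g. [{"question": "...", "answer": "..."}, ...]."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def answer(user_question: str, kb: list[dict], threshold: float = 0.6) -> str:
    """Answer only from the approved knowledge base; otherwise escalate to a human."""
    best_score, best_entry = 0.0, None
    for entry in kb:
        score = SequenceMatcher(None, user_question.lower(), entry["question"].lower()).ratio()
        if score > best_score:
            best_score, best_entry = score, entry
    if best_entry and best_score >= threshold:
        return best_entry["answer"]
    # No confident match: never improvise an answer; hand off instead.
    return "I'm not sure about that one. Let me connect you with a member of our team."


if __name__ == "__main__":
    kb = load_knowledgebase("approved_kb.json")
    print(answer("What are your support hours?", kb))
```

The key design choice is the fallback: when the approved knowledge base has no good answer, the bot escalates to a person rather than generating one.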

AI for Visual Content Creation

Ideation and Inspiration

  • Mood Boards and Concept Exploration: It is acceptable to use AI to generate visual concepts, mood boards, or style suggestions based on our project briefs and specific creative direction.
  • Reference Image Generation: It is acceptable to use AI to create rough sketches, mockups, or reference images that serve as inspiration for human designers and artists.
  • Brainstorming Visual Elements: It is acceptable to employ AI to generate icons, patterns, textures, or other abstract visual components to identify elements that could be incorporated into final designs.

Restrictions on Final Output

  • No Direct Use: AI-generated images, videos, or other visual content should never be used as the final deliverable to clients or for HMG public-facing materials.
  • Human Refinement is Mandatory: All AI-generated visuals (including abstract elements) must be significantly modified, refined, and enhanced by human designers to ensure they meet brand standards, artistic quality, and project requirements.
  • Copyright and Licensing: Always verify the copyright and licensing terms of any AI-generated content before using it for any external or publication purpose, even as a reference or abstract element. If you cannot verify the terms, do not use the content.

Transparency and Attribution

  • Internal Documentation: Clearly document the use of AI in all visual content creation processes within project files or internal communications. This can be done by storing AI-generated elements in project folders and/or by providing an “AI-generated Content” attribution in project documentation.
  • Client Communication: If discussing AI-generated concepts or visuals with clients, be transparent about their origins and emphasize that they are for ideation purposes only.

AI Use for Data Analysis and Insights

Using AI for predictive analytics and data visualization is allowed as long as the source data is first-party or comes from an authorized and verified source, and no data privacy or sensitive content rules are violated.

Applications for Data Analysis and Insights

  • Market Research: Utilize AI to analyze market trends, competitor activity, and customer sentiment from social media, reviews, or surveys.
  • Customer Segmentation: Leverage AI to identify and segment audiences based on demographics, behavior, or interests.
  • Predictive Analytics: Explore AI tools that can predict customer behavior, churn rates, or campaign performance, allowing you to make data-driven decisions.

Precautions for Data Analysis and Insights

  • Acceptable Tools: Use only AI tools that have been approved for use at HMG by the management team. 
  • Data Privacy and Security:
      • Comply with Regulations: Ensure that your use of AI for data analysis adheres to all relevant data privacy regulations, such as GDPR and CCPA.
      • Anonymize and Protect Sensitive Data: Take precautions to anonymize or pseudonymize personal data before using it for analysis; a minimal illustration follows this list. Robust security measures are imperative to protect sensitive data from unauthorized access or breaches.
  • Human Oversight and Expertise:
      • Do Not Rely Solely on AI: AI should be a tool to augment human expertise, not replace it. Always involve domain experts and analysts to interpret results, validate findings, and make informed decisions.
      • Review Results: Ensure that AI-generated insights are scrutinized by humans for accuracy, relevance, and potential biases.
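
The sketch below shows one minimal way to pseudonymize direct identifiers before a dataset reaches any AI tool. It assumes a pandas environment and uses hypothetical CRM export and column names; adapt it to your actual data and confirm the approach with the management team.

```python
# Minimal pseudonymization sketch (hypothetical file and column names).
# Goal: replace direct identifiers with salted one-way hashes before analysis.
import hashlib

import pandas as pd

SALT = "replace-with-a-secret-salt"  # in practice, store this outside source control


def pseudonymize(value: str) -> str:
    """Return a salted hash so records stay joinable without exposing identity."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()


# Load a hypothetical CRM export.
df = pd.read_csv("crm_export.csv")

# Hash direct identifiers that are still needed for joining or deduplication.
for column in ("email", "full_name", "phone"):
    if column in df.columns:
        df[column] = df[column].astype(str).map(pseudonymize)

# Drop fields that are not needed for the analysis at all.
df = df.drop(columns=[c for c in ("street_address", "date_of_birth") if c in df.columns])

df.to_csv("crm_export_pseudonymized.csv", index=False)
```

Salted hashing keeps records joinable across exports without exposing the underlying identity; dropping unneeded fields outright is safer still.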

Final Considerations for All AI Usage

  • Fact-Checking and Verification: Always double-check any facts, statistics, or claims generated by AI against reliable sources to ensure accuracy and avoid misinformation.
  • Ethics: Be transparent about the use of AI in marketing.

Policy Review

This policy will be reviewed and updated regularly to reflect advancements in AI technology and best practices.

 

Note to Readers

You are welcome to borrow and adapt this policy to suit your own company needs. As Hill Management Group does not offer legal advice, we strongly recommend that you have your own corporate counsel review policies prior to publication. 

If you are interested in a thoughtful guideline about how to understand and implement AI tools in your own business, we have published a free ebook on this topic. You can download your free copy after completing the form below.