
MIT License

Patient Rights in AI - Version 1

Note: this is an open-source, text-based version of the "Patient Rights in AI" document. You can also access a formatted PDF copy of the document here. Please see the final section for ways to provide feedback.

About This Document

This document was created by a cohort convened by The Light Collective, a national community of patient activists, clinicians, and health data experts committed to building tech and digital rights for patients. The cohort is a diverse task force of patient-oriented experts and community leaders convened to craft patient-led standards for the design, use, and governance of artificial intelligence (AI) in healthcare. Our aim is to define standards and rules in health AI that advance the rights, interests, and concerns of patient communities in health technology. This is a living document, and a group of thirteen is not ‘representative’ of all patients. As The Light Collective moves forward, it continues to solicit and incorporate the perspectives and voices of a wide range of communities; see the final section for ways that you can provide your input and feedback for future versions of this document.

Disclaimer

The collective impact of AI holds the potential to advance research, accelerate cures, and improve care. At the same time, as it pertains to patient health, it is one of the most concerning issues of our time. In shaping this document and reflecting on patient rights with respect to artificial intelligence in healthcare, we wrestled with our duty to the individual and to the diverse communities of which we are a part.

This document outlines a set of patient rights we believe are critical to the creation and application of AI in healthcare. This patient-led declaration of rights is necessary to establish the patient voice in initiatives that are already forming AI standards, codes of conduct, and Bills of Rights, and that do not include representation from patients. Our aim for this document is to establish a baseline for patient concerns and remedies in Health AI.

We believe that health AI should be guided by the voices and perspectives of patients and diverse racial and ethnic communities. We should have a say in any decisions that affect us. Nothing should be decided about us, without us.

Patient Rights in Health Care Artificial Intelligence: A Call to Action

In the current landscape, health systems and technology developers are fervently embracing artificial intelligence (AI) tools to assist clinicians in their roles and to facilitate various other processes in the healthcare system. Amid this haste, however, a significant oversight persists: the exclusion of patient perspectives from the design and governance of AI solutions. The omission of patient representation repeats history we do not want to see: a power dynamic in which those with the most influence in healthcare at best tokenize, and at worst ignore, the voices of those they serve. In doing so, they risk creating technologies that not only fail to address some of the most fundamental problems they were designed to solve (improved care and better outcomes for patients) but also exacerbate existing disparities and directly cause additional harm.

Crucial questions emerge: How might we ensure equal benefit for all, while upholding patient rights in the process? How might we build AI systems that ensure patient community oversight of the ways in which this emerging technology is harnessed in healthcare? How can we build transparency and truly informed consent into uses of AI in healthcare? How might we build governance systems that center the perspectives, interests, and needs of the communities who are most impacted by AI?

As individuals with extensive lived experiences navigating health challenges, we recognize the potential power of AI and big data to transform healthcare. However, for AI to be truly transformative, it must embody qualities of safety, accuracy, equity, and above all, respect for the individuals upon whom the algorithms are trained and utilized.

The inclusion of patient voices in health policy and decision-making around AI models is not just desirable but a matter of life and death for vulnerable patient populations. By actively involving those who have spent considerable time navigating the complexities of healthcare firsthand, we aim to ensure that AI development aligns with the diverse needs and rights of the individuals it aims to serve. This inclusivity is not just a moral imperative; it is foundational to the responsible and effective deployment of AI in health and in care. These insights are integral to shaping a healthcare AI landscape that is impactful, ethical, and transparent [1][2][3].

Foreword

Now, more than ever, technology enables seemingly irrelevant sources of behavioral and personal data to be used for the benefit of health or healthcare. However, the use of such data is often hidden from, or unclear to, the generators of the data and its original owners (i.e., patients), which can contribute to privacy violations and potential harm, while at the same time blocking patient access to their own personal data.

For example, digital technologies such as hospital portals, search engines, apps, and online support groups can be vital, often life-saving tools for people navigating health challenges. However, these technologies may also expose patients to privacy breaches, illegal data sharing, exploitation of health vulnerabilities, mis- and dis-information[4], fraud, and other harms, which we refer to as "cyber harms" [5][6]. Patients are also likely to be limited in their ability to use health technologies for positive purposes because they are unable to access their own data or are constrained in the portability and accessibility of their data, even between their own personal devices.

In addition, as patient communities use these technologies, we generate new potential for harm or threats to patient safety, while enriching the companies that collect and control these technologies[7]. To date, there is little transparency or accountability for the variety of cyber harms that impact patient populations. Patients and patient communities typically do not have any voice in how their information is used and shared in the deployment of health AI.

We therefore advocate for the creation of an independent, patient-led governance body for health technology. This body will define digital rights for health AI and collaborate with regulatory authorities and stakeholders to enforce these rights.

In 2019, the Global Indigenous Data Alliance (GIDA) developed and articulated the CARE Principles for Indigenous Data Governance and a set of rights around ‘data for governance’ and ‘governance of data’. We cite these principles as inspiration for the definition of patient rights in health AI, and we advocate for data sovereignty and enforceable legal rights for patient communities.

Below we outline specific rights and protections we seek in the context of digital technologies and personal data in the era of health AI. Further, we outline for each right what it means in plain language, why it matters, and how it might be actualized in practice.

COLLECTIVE RIGHTS FOR PATIENTS IN HEALTH AI:

1. Patient-Led Governance
Patients are true co-leaders on priorities. Patients collectively have the right to co-create the rules that govern how artificial intelligence (AI) is designed and used in healthcare.
What it Means:
Patients whose lives are at stake must be key contributors in designing and developing the rules and standards for how AI is applied in healthcare and in health technologies.

Why it's Important:
Health AI development and usage need to be aligned with the real-world needs of patients. The assertion of this right also provides an opportunity to rebalance power in healthcare toward the communities who are affected by healthcare decisions outsourced to AI.

What it Looks Like in Action:
Patient representatives must be included as active participants in the design, policymaking, and development of rules that govern AI.

The Health AI ‘design lifecycle’ must include patients to co-develop priorities, evaluation metrics, and feasibility measures at every step.

Patient community representatives must be active voting members of AI governance bodies.

2. Independent Duty to Patients
In order to adopt fair, safe, and equitable health AI, patient communities require representation that holds a legally enforceable and independent duty of loyalty to improve outcomes for patients[8].

This duty to patients must be part of any negotiations, and must be held independently from fiduciary duties to financial shareholders.
What it Means:
A founding ideal of medicine is the fiduciary relationship between doctors and their patients, in which doctors must put the well-being of the patient above their own self-interest. There is an established duty of loyalty to the patient as a primary stakeholder and beneficiary of clinical care.

Duty of loyalty is a legally binding duty to act in the best interest of patients, if and when those interests conflict with those of other stakeholders in health AI.

Why it's Important:
AI is starting to make big decisions in healthcare, like who gets what kind of treatment based on health risks or behaviors. While doctors have to follow strict rules because of their medical licenses and promises like the Hippocratic Oath, companies that make AI are not guided by these same rules.

If AI replaces the duties of doctors, we need to make sure that AI also has a strong duty to independently serve patients' interests.

Tech companies usually aim to make as much money as possible for their shareholders, which might not always be good for patient outcomes. Therefore, when AI plays a role in healthcare, patients require independent representation with a duty to ensure patients are treated fairly and well, without any other conflicts getting in the way.

What it Looks Like in Action:
Integration of diverse patient representation into the practice of developing AI as equal members of a team, voting members of any governance bodies, and/or co-principal investigators in publications.

In practice, this also means establishing fiduciary relationships to require AI stewardship for patient interests. See this paper.

3. Transparency
Transparency in the development and use of health AI is critical to ensure grounding in scientific evidence, clear clinical benefit, and mitigation of harms.
What it Means:
Transparency is necessary in three ways:

1. Patients should be informed about why and how their data are being used in generative or predictive AI models.

2. Patients should be informed when guidance or communication is based on AI rather than direct human input.

3. Patients require access to accurate and reproducible evidence of the efficacy of an AI application in care.

Why it's Important:
Transparency builds trust between patients and health AI providers, ensures patients' agency in decisions affecting their health, and informs users how AI tools have been developed and whether they are clinically validated and grounded in scientific evidence.

What it Looks Like in Action:
Patients have the right to be informed in culturally and linguistically appropriate ways to understand:

  1. When AI is involved in their care (e.g., diagnosis, treatment) and/or interactions with their providers.

  2. Where and how AI tools get their information (i.e., traceability).

  3. What the risks and benefits of specific AI uses in care are.

  4. When evidence of biases or inaccuracies has been identified in a way that has impacted care.

In addition, AI outputs must be recognized as a part of a patient's designated record set (DRS), and individuals have the right to access outputs if the information is used to make decisions about their care or the coordination of their care.

Audit and certification of health AI must be independent and separated from the financial interests of industry or institutions.

4. Self-Determination
AI should be developed and used in a way that enables patients to exercise the fundamental right to make informed choices about their own health and healthcare.
What it Means:
Patients have the right to self-determination.

As such, patients should have a choice to accept or decline an AI intervention in care.

Why it's Important:
Self-determination is important because AI is increasingly used to determine a patient's access to care or a specific treatment.

AI predictions about health risk are no replacement for informed decision-making. For example, if AI is making predictions about a patient's health risk and those predictions are inaccurate, then a patient may lose benefits, choices, or access to care.

Patients must reserve the right to defy the odds, even if the odds are not in their favor. Patients must have autonomy when it comes to separating their private lives from decisions about care.

What it Looks Like in Action:
Patients should be given the chance to make an informed decision to opt out of, or appeal, AI-generated decisions.

If a predictive algorithm is used to determine a patient's access to a certain treatment, there should be a right to appeal the decision.

For example, AI should not be leveraged to deny care, limit choices, or ration care that is otherwise considered the standard of care.

For example, social media posts or online purchases should not be sourced to create "risk scores" about patient behavior or health that are used in clinical practice.

5. Identity Security and Privacy
Patients have the right to expect that their security and privacy will be prioritized in the design and use of Health AI. Preservation of identity means patients have a right to choose how to share or disclose all or parts of their identity.
What it Means:
Health AI must be designed, developed, and used to protect or improve the safety, privacy, and confidential choices of any patient or community in a way that protects patients' individual and shared identities.

Why it's Important:
Security isn't just about protecting data or businesses' assets. It's about protecting patient lives, safety, and choices when AI can make life-altering determinations, predictions, or diagnoses about a patient's health that may impact clinical safety as well as individual well-being.[10]

"Anonymizing" a patient does not prevent harm to a specific part of their individual, ethnic, gender, or health identity if AI tools are used to target, manipulate, or generally misrepresent identity.

What it Looks Like in Action:
A priority in any design process should be a clear articulation of risk to patient privacy and cybersecurity.

AI should not be leveraged to target, manipulate, misinform, or scam patients or people with disabilities.[11]

In order to stop the proliferation of medical mis- and dis-information, health AI services focused on adtech or marketing should be banned from use on social media.[12]

Health institutions have a responsibility to proactively research and disclose security vulnerabilities, and to reduce the risk of AI being weaponized by bad actors.

6. Right of Action
Risk sharing requires tangible ways for Health AI developers to be accountable to affected communities. If there is evidence that certain uses of AI cause harm, patients must have the right to stop and remedy further harm through legally enforceable action.
What it Means:
Businesses and institutions must hold the burden of risk for people affected by the health AI solutions that those organizations deploy.

If there is evidence that certain uses of AI cause harm, patients should have legal recourse to stop further harm.

Why it's Important:
Health information is sometimes used to discriminate in bad faith against people with illness and/or disabilities.

People affected the most by the development of AI must hold the resources and rights to take legal action if harm occurs.

What it Looks Like in Action:
This right requires enforcement of policy to regulate health AI, with a focus on the protection of patient interests.

Legal action against entities that violate privacy or misuse data is necessary for legally enforceable accountability and justice.

Arbitration clauses and waivers of rights should be banned from consent forms and privacy policies.

7. Shared Benefit
Diverse patient communities must equitably share in the benefits created as a result of health AI.[13]
What it Means:
Diverse communities who make the highest-risk contributions to AI must also have an equity stake or share in the benefits.

Patient communities inherently hold the burden of risk as AI is adopted in clinical practice.[14]

Health AI should be created in a way that establishes patient communities as equal partners with respect to the industry and research institutions developing health AI.

Why it's Important:
Historically, digital medicine has relied on the unpaid labor of patients to "engage" with researchers and technologists. When patient advocacy only allows those with privilege to donate their time and data, we build biased knowledge systems that serve to deepen health, knowledge, and economic disparities.

Training AI on community data without building capacity for diverse patients and people with disabilities to share in the benefits only serves to further health disparities.

Sharing equally in the benefits of AI can help promote human rights, social justice, and inclusion in the development and deployment of AI.

What it Looks Like in Action:
Health AI and the health data marketplace should not be built upon the stolen or unpaid labor of patient communities.

As our collective data is increasingly commodified, institutions, governments, and industry must actively give back to patient communities.

Resources and funding must be shared back with communities to create shared infrastructure, education, and projects driven by patient priorities and led by diverse patient communities.

Appendix

Definitions/Terminology

  • Algorithm: A computer-generated set of instructions, created by finding patterns in a patient's or community's health data. Algorithms in health can use data to predict a person's risk of a certain health outcome (see the illustrative sketch after this list).
  • Algorithmic Transparency: The principle that the mechanisms of decision-making algorithms, especially in AI, should be open and understandable to users and other stakeholders. If the algorithm is a “black box” or protected for intellectual property reasons, the mechanisms by which they generally work and/or the outputs should be transparent, understandable, and auditable to/by the user.
  • Anonymization: The process of removing personally identifiable information from data sets, so that the people whom the data describe remain anonymous and unidentifiable.
  • Artificial Intelligence (AI): Technology that can do any of the following: learn from data, make decisions, solve problems, understand natural language, recognize patterns, and adapt to new information. In healthcare, AI refers to the development and application of algorithms and software to process complex medical data, assist in diagnosis, personalize treatment plans, and enhance healthcare delivery overall.
  • Bad Actors: We define bad actors in this document as those who knowingly work against the interest of a patient or patient community. Bad actors could be cyber criminals, scammers, or companies wishing to leverage or monetize patient data in ways that cause harm to an individual or community.
  • Conflict of Interest: A conflict of interest happens when a person or organization’s own interests might interfere with their ability to do their job or make fair decisions on behalf of a patient.
  • Cybersecurity: The practice of protecting systems, networks, and programs from digital attacks, theft, or harm to a person or property.
  • Data Collective: A group arrangement where data from many individuals is aggregated into a shared collective pool.
  • Data Portability: The principle that individuals have the right to receive, transfer, and use their personal data across different services.
  • Designated Record Set (DRS): Under HIPAA, a Designated Record Set refers to all of the health and billing records a healthcare provider or plan uses to make decisions about an individual patient’s care. HIPAA ensures that patients have the right to access and review these records to verify their accuracy.
  • Discrimination: Limiting choices, access to care, or outcomes of different people based on their medical conditions, disabilities, race, ethnicity, gender choice, age, language, socio-economic and legal status.
  • Fiduciary: A fiduciary is someone legally required to act in the best interests of another person, putting those interests above their own. For example, in an attorney-client relationship, the attorney has a fiduciary duty to represent the client's interests faithfully and confidentially, ensuring their legal rights are protected.
  • Health AI: A branch of artificial intelligence technology that focuses on the creation of systems capable of processing health-related data, making decisions, and performing actions with minimal human intervention. It's used in various healthcare applications, from diagnostics to treatment recommendations.
  • Health Data Breach: An incident in which sensitive, protected, or confidential health data is accessed or disclosed without authorization.
  • Governance: The process for making decisions and rules. “Technology Governance” is about making rules and policies that a group of people, like a hospital or a health tech company, must use to make sure health technology is used in a good and safe way.
  • Informed Consent: The process by which patients are fully informed about the procedures and risks involved in a healthcare intervention or technology. Informed consent requires patients to learn about and understand the benefits or risks to them before they choose to use that technology.
  • Legal Recourse: The right of an individual to seek or attain legal remedy in court due to a loss or harm, breach of contract, violation of privacy, or misuse of data.
  • Patient Autonomy: The right of patients to make informed decisions about their own healthcare, free from coercion or interference from others.
  • Patient-Governed Data Trusts: Entities established to manage patient data, where the control and decision-making authority lies with the patients themselves. These trusts have a duty of loyalty to prioritize patient interests in the use of their health data.
  • Predictive Profiling: The use of data analysis tools to predict individuals' future behavior or health outcomes based on their personal data, whether it is health-specific data or other types of data.
  • Traceability: The ability to know the source and handling of data that is used in health AI.[15]
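
To make the "Algorithm," "Predictive Profiling," and "Traceability" entries above concrete, here is a minimal, hypothetical Python sketch of the kind of risk-scoring model this document is concerned with. Every name and number in it (the `predict_readmission_risk` function, the weights, the provenance record) is invented for illustration and does not describe any real product, dataset, or deployed system.

```python
# Hypothetical illustration of a predictive risk-scoring "algorithm" and a
# traceability record, as defined in the glossary above. All field names
# and numbers are invented for this example.

PROVENANCE = {
    # Traceability: where the training data came from, and whether consent
    # was obtained. Gaps like consent_obtained=False are exactly what this
    # document objects to.
    "data_sources": ["example_hospital_ehr_2020_2023"],
    "consent_obtained": False,
    "model_version": "0.1-demo",
}

# A toy "model": weights found by pattern-matching over past patient data.
WEIGHTS = {"age_over_65": 0.30, "prior_admissions": 0.25, "lives_alone": 0.15}

def predict_readmission_risk(patient: dict) -> float:
    """Return a 0..1 'risk score' from a few binary patient features.

    Real systems use far more features, but the structure is the same:
    features in, opaque score out.
    """
    score = sum(WEIGHTS[f] for f in WEIGHTS if patient.get(f))
    return min(score, 1.0)

if __name__ == "__main__":
    patient = {"age_over_65": True, "prior_admissions": True, "lives_alone": False}
    risk = predict_readmission_risk(patient)
    # A score like this may silently drive discharge or coverage decisions,
    # which is why the rights above demand transparency and a right to appeal.
    print(f"risk score: {risk:.2f} (sources: {PROVENANCE['data_sources']})")
```

The point of the sketch is structural: a handful of features goes in, an opaque score comes out, and without a provenance record like the one above there is no traceability for patients or auditors to examine.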

Example Types of Health AI

This section outlines examples of two different types of Artificial Intelligence that may be used in a healthcare setting, along with potential risks or harms associated with their use.

Predictive AI (advanced analytics, machine learning, deep learning)

  • Who: Patient. Example: Using an AI-powered fertility tracking app that predicts ovulation windows. Potential risks/harms: Data privacy breaches; model accuracy may wane over time.

  • Who: Provider. Example (diagnosis): Developing predictive models for diagnosing diseases early, such as cancer detection from imaging data. Potential risks/harms: Bias in training data leading to inaccurate or unfair outcomes; overreliance may reduce human oversight.

  • Who: Provider. Example (treatment): Using a predictive AI model for treatment recommendations, such as the optimal medication for treating hypertension or diabetes. Potential risks/harms: Patients with multiple comorbidities may not have the same outcomes as the patients without those comorbidities to whom they are compared; algorithms may reinforce existing disparities and cause direct harm to patients.

  • Who: Payor. Example: Using AI to determine a patient's likelihood of paying for care, and whether to approve care on that basis. Potential risks/harms: Potentially biased algorithms can deny patients care; regulation is moving against payors' use of AI models in this space.

Generative AI (a type of AI that generates text, images, and other content)

  • Who: Patient. Example: Using GPT-4 to simplify complex medical jargon into easier-to-understand language. Potential risks/harms: The model may not accurately translate medical information, and may share data with unwanted third parties.

  • Who: Provider. Example: Using GPT-4 to listen to doctor-patient conversations and generate documentation. Potential risks/harms: Patients may not be aware that the healthcare system is using AI; documentation may contain system-generated errors that patients should be able to review; these systems can interfere with the human interaction between patient and clinician, with the potential to disrupt trust if implemented improperly.

Case Examples/Vignette Collection:

  • False Risk: Amelia is a newly diagnosed stage 4 cancer patient; she had a recurrence after an early-stage diagnosis. Looking for help, she finds an app that she feels might be able to help her. This app uses AI to match patients to clinical trials. Amelia downloads the app and puts in her medical history, but finds that the app does not have information on her specific subtype. As she continues to use the app, its AI output shares information on clinical trials for brain metastases ("brain mets"). Amelia does not have brain mets. Amelia wonders if she does in fact have brain mets, and questions whether her oncologist has told her everything.

  • Early Detection: Steven is an avid runner, watches what he eats, and is in overall good health. For a period of two days he is not feeling well, and assumes he has the flu. He goes to see his primary care physician. The PCP's office has started using AI to track patients' wellness and potential symptoms. The report gave a warning about a possible AKI (Acute Kidney Injury). Steven was able to get the help he needed before his condition became worse. AI can now predict an AKI up to two days before the injury occurs, whereas these types of injuries are often not diagnosed until after they have happened.

  • Automated Denials: Jack needs a specific medication to treat the cancer he has been diagnosed with. The insurance company, using AI to automate medication approvals, denies coverage. Jack's oncologist, with the help of AI, submits an appeal on the medical necessity of this medication for the diagnosis. The insurance company responds with another denial (using AI as its only source of information). This back-and-forth continues for weeks until, finally, a (human) judge needs to make the final decision on whether Jack should receive his treatment. The insurance company relied solely on AI-powered decision-making, without human intervention, to make its "informed" decision.

  • Targeting A Community: Maria organizes a community on social media, with the goal of helping her community find support, resources, and access to research. Unbeknownst to Maria and the participants, a health startup begins to develop a list of members. Another startup scrapes the group's posts, without consent, to train a model to predict suicidal behavior among people with her condition, and the predictive model starts getting used in clinical care. A pharma marketing firm acquires the members' posts and uses the data to create predictive profiles of the members to better target them with advertisements. A malicious actor acquires the list of Maria's community and uses the information to target vaccine misinformation during the COVID-19 pandemic.

  • Early Discharge: Carol is a rare disease patient with multiple comorbidities. After she is admitted to the hospital with an injury, her care team uses an AI model to determine her treatment options based on her risk of developing sepsis. Because the algorithm was trained on patients without her additional comorbidities, it did not represent her risk accurately. Carol is recommended for early discharge from her hospital stay, and she develops sepsis within 24 hours, despite the model's prediction. After further investigation and independent validation of the algorithm, it turns out that many patients were discharged early due to a highly inaccurate, newly deployed AI model.

Patient Rights in AI Working Group: About the Authors

This cohort represents some diverse perspectives, but does not represent all perspectives nor perfectly represent those that are included. Some of the perspectives reflected within this group of authors include:

  • Cordovano, Grace; deBronkart, Dave; Downing, Andrea; Duron, Ysabel; Glenn, Lesley Kailani; Holdren, Jill; Karmo, Maimah; Lewis, Dana; Murphy, Marlena; Robinson, Valencia; Salmi, Liz; Sarabu, Chethan; Von Raesfeld, Christine

Grace Cordovano, PhD, BCPA

  • Cancer misdiagnosis patient | primary carepartner to 2 disabled adults
  • Board Certified Patient Advocate (BCPA)
  • Member, HITAC Interoperability Standards Workgroup
  • Member, HIMSS Public Policy Committee
  • Co-Chair, The Sequoia Project, Consumer Voices Workgroup
  • Advisor, HLTH Foundation Techquity Coalition
  • Fellow, CancerX
  • Member, NAM AI Code of Conduct Project
  • Patient-In-Residence, Digital Medicine Society

Dave deBronkart

  • Survivor of near-fatal kidney cancer, 2007
  • Avid user of digital health technologies
  • Evangelist for patient empowerment, especially through health data access and use
  • Co-founder and Chair Emeritus, Society for Participatory Medicine
  • BMJ Patient Advisory Panel inaugural member, 2014-2020
  • Founding co-chair, HL7 Patient Empowerment Workgroup
  • OpenNotes Advisory Board

Andrea Downing

  • Co-Founder of The Light Collective
  • BRCA Community Advocate, Security Researcher

Ysabel Duron, BA

  • 23-year Hodgkin's Lymphoma survivor, award-winning journalist, Latino agency leader
  • Founder/Executive Director, The Latino Cancer Institute

Lesley Kailani Glenn, BS

  • Native Hawaiian, living 11 years with metastatic breast cancer
  • CEO/Founder, Project Life, a virtual wellness house for those living with metastatic breast cancer

Jill Holdren

  • Co-Founder of The Light Collective
  • Patient Advocate, Hereditary Ovarian and Appendiceal NET Cancer Survivor

Dana Lewis

Marlena Murphy, MA

  • African American patient advocate residing in the southeast living with metastatic triple-negative breast cancer (TNBC)
  • Program Manager, Guiding Researchers & Advocates to Scientific Partnerships (GRASP)

Valencia Robinson, Ed.S

  • Co-founder of The Light Collective
  • 17-year Triple Negative Breast Cancer Survivor, Patient Advocate

Liz Salmi, AS

  • Person living with a malignant brain tumor for 17 years
  • Co-Founder, #BTSM (Brain Tumor Social Media)
  • Communications & Patient Initiatives Director, OpenNotes, Department of Medicine, Beth Israel Deaconess Medical Center
  • Former Member, Board of Directors, National Brain Tumor Society

Chethan Sarabu, MD

  • Patient with non-neurogenic neurogenic bladder of unknown etiology since approximately 18 months of age
  • Significant time spent as family carepartner for those with cancer and dementia
  • Clinical Assistant Professor, Stanford Medicine
  • Board certified in Pediatrics and Clinical Informatics
  • Board member, The Light Collective

Christine Von Raesfeld

  • Rare / Autoimmune / Undiagnosed patient
  • Cofounder, People With Empathy
  • Board member, Light Collective
  • NIH All of Us: Committee for Access, Privacy, and Security; Participant Ambassador; Rare Disease Subcommittee
  • Member Patient Senate, Patients Rising
  • Committee Member, Partnership for Quality Measures
  • Advisor, Research to the People Stanford Medicine

Revisions/Process/Giving Feedback For This Document

This is a living document, subject to updates and revisions as the field of health AI evolves and as more patient voices and perspectives are incorporated. The Light Collective is committed to a transparent revision process, actively seeking feedback from a diverse array of stakeholders within the patient community and beyond. All updates to this document will be documented and made publicly accessible to ensure transparency and inclusivity in the development of patient rights in health AI.

Here are different ways that you can provide feedback:

  • GitHub (start a discussion): For a question, comment, idea for a future version of the document, or other public-facing feedback you think others interested in this document should be aware of, start a discussion on the project's GitHub repository. Audience: public (TLC will also review and collate this alongside other feedback when considering revisions).

  • GitHub (propose an edit): Open a pull request to edit this document directly. If you are unfamiliar with this process, GitHub publishes a guide to editing content and making pull requests. Audience: public (TLC will also review and collate this alongside other feedback when considering revisions).

  • Feedback form: Fill out the feedback form, which will guide you in giving feedback about your perspective and thoughts on the document. Audience: TLC will review; an anonymized version of the feedback may be collated and shared with the working group.

  • Email: Send your feedback to contact@lightcollective.org. Audience: TLC will review; an anonymized version of the feedback may be collated and shared with the working group.