Artificial Intelligence Systems Usage Policy
| Effective Date | April 1, 2026 | Policy Owner | Information Technology Services (ITS) |
| Last Reviewed Date | April 7, 2026 | Approved By | President's Council |
| Review Cycle | Annual | Policy Contact | Information Security & Compliance Analyst |
Purpose
New York Tech has established the following policy to ensure Artificial Intelligence (AI) Systems are used in a manner that is consistent with university policies and applicable laws, protects New York Tech-defined Confidential and Restricted Data, and appropriately addresses any resulting risks to the university and our community.
Scope
This policy applies to all members of the university community including faculty, staff, student workers, contractors, affiliates, and others who are accessing or using artificial intelligence systems and other AI-based technology resources in support of university operations.
Definitions
- Artificial Intelligence Systems (AI Systems): AI systems are engineered or machine-based tools and/or software applications that leverage artificial intelligence algorithms, machine learning, and natural language processing to automate tasks, analyze data and generate content. AI Systems include, but are not limited to, Generative AI/LLM Models, Agentic AI, Artificial General Intelligence, Vision Language Models, and Specialized AI Chatbots.
- Intellectual Property (IP): IP consists of legally protectable intangible assets, such as data, models, or information, that are developed through academic research or scholarly activities.
- Confidential Data: Data which is legally regulated, and data that would provide access to confidential or restricted data. Data is typically classified as Confidential when the unauthorized disclosure, alteration, or destruction of that data could cause a significant level of risk to the institution or its affiliates.
- Restricted Data: Data where a decision has been made by the university to not publish or make public, and data protected by contractual obligations. Data is typically classified as Restricted when the unauthorized disclosure, alteration, or destruction of that data could result in a moderate level of risk to the institution or its affiliates.
- Personal Information: Personal Information is any information relating to an individual that identifies, or can reasonably be used to identify, an individual, directly or indirectly (including in combination with other data), by reference to an identifier such as a name, an identification number, location data, an online identifier, or one or more factors specific to the identity of that individual. Personal Information includes information that has been de-identified but could reasonably be re-associated with an individual, including through the use of AI Systems.
Policy Statement
New York Tech expects all New York Tech community members to abide by this policy when using AI Systems for teaching and learning, research, and work-related functions:
- Procuring AI Systems (including free tools): Submit a Software Request Form before purchasing (or acquiring for free) any product or service that relies on AI to operate. New York Tech's ITS team and General Counsel will route the request to resources that can help validate the vendor's product and verify that the contract language does not introduce undue risk to the university. These processes can also direct you to existing vendors who have already been vetted, potentially avoiding duplicate spending.
- Do not input Confidential or Restricted Data: New York Tech community members must not input any Confidential or Restricted Data into AI Systems unless the AI System has been officially approved and authorized by ITS. This includes, but is not limited to, PII, PHI, Personal Information, and proprietary New York Institute of Technology information.
- Do not input information that violates IP or general contract terms and conditions: New York Tech community members must be aware of the terms and conditions under which they are using an AI System, and must respect Intellectual Property (IP) rights. It is incumbent on individual users to ensure that the inputs to, and outputs of, their AI Systems are handled in compliance with copyright and patent laws, data protection regulations, and laws addressing identity theft. Please note that vendor licenses govern many of the digital resources provided by New York Tech, and some publishers assert that using their content with AI Systems is not allowed.
- Confirm the accuracy of the output provided by AI Systems: New York Tech community members who use AI Systems in their work are also responsible for checking the accuracy of any information or action generated by AI Systems. AI Systems cannot be relied upon without confirmation of accuracy from additional sources. It is possible for AI-generated content to be inaccurate, biased, or entirely fabricated (i.e., "hallucinations"). Note that such AI-generated content may contain copyrighted material. You are responsible for any content that you publish that includes AI-generated output.
- Check the output of AI Systems for bias: New York Tech community members must consider whether the data input into, and the output of, AI Systems produce results that may have a biased impact on individuals based on their protected classifications under applicable law, such as race, ethnicity, national origin, age, sexual orientation, or disability status. Do not rely on any output that is indicative of potential bias.
- Disclose the use of AI Systems: New York Tech community members who leverage AI Systems to generate or summarize content must provide appropriate notification and attribution. This includes notifying participants when calls or meetings are recorded, and sharing meeting outputs (e.g., meeting notes, minutes, summaries, or action items) with all who attended the meeting. Before sharing any outputs, ensure that no Confidential or Restricted Data is contained within them.
- Comply with third-party Intellectual Property rights: New York Tech community members must not claim output generated by AI Systems as their own. If you quote, paraphrase or borrow ideas from the output of an AI System, confirm that the output is accurate and that you are not plagiarizing another party's existing work or otherwise violating another party's Intellectual Property rights.
- Do not use AI Systems to produce malicious content: New York Tech community members are prohibited from using AI Systems to generate malicious content, such as malware, viruses, worms, and Trojan horses, that can circumvent access control measures put in place by New York Tech, or any other third-party entity, to prevent unauthorized access to their respective networks.
Please note that AI Systems and tools require significant processing power, and the resources required have an environmental impact. Be mindful of that environmental impact when using these tools.
Related Internal Policies
- Acceptable Use Policy
- Data Security and Access Management Policy
- Non-Discrimination and Discriminatory Harassment Policy
Regulatory References
The following are references to related federal and state laws, policies, guidelines, and resources on cybersecurity.
- Federal
  - NIST (National Institute of Standards and Technology), U.S. Department of Commerce, Information Technology Laboratory, Computer Security Division, Computer Security Resource Center
- Federal Legislation
  - HIPAA (Health Insurance Portability and Accountability Act)
  - FRCP (Federal Rules of Civil Procedure – a.k.a. eDiscovery)
  - FERPA (Family Educational Rights and Privacy Act)
  - GLBA (Gramm-Leach-Bliley Act)
  - FISMA (Federal Information Security Modernization Act)
- State Regulations
  - SHIELD Act (New York's Stop Hacks and Improve Electronic Data Security Act) and other state security regulations
- Associations
  - PCI DSS (Payment Card Industry Data Security Standard)
- International
  - GDPR (European Union's General Data Protection Regulation)
  - PIPEDA (Canadian Personal Information Protection and Electronic Documents Act)
  - PIPA (British Columbia's Personal Information Protection Act)
Forms Associated with the Policy
- Software Request Form
Violations
Violations of the policy may result in disciplinary action, including dismissal from employment, expulsion from further study, and termination or suspension of IT and network privileges. In addition, if an employee's or contractor's conduct violates federal or state laws, the employee or contractor may be subject to prosecution under such laws. The university reserves the right to investigate suspected violations using all means available.