The discussion of Artificial Intelligence (“AI”) in the workplace typically focuses on whether an AI tool or model has a discriminatory impact, meaning whether the AI’s output creates an unlawful disparate impact against individuals in a protected category. That discussion, however, rarely centers on the types of training data used, or on whether the training data itself could harm the workers tasked with training the AI model.
It has been four years since Congress enacted the Eliminating Kickbacks in Recovery Act (“EKRA”), codified at 18 U.S.C. § 220. EKRA initially targeted patient brokering and kickback schemes in the addiction treatment and recovery space. However, because the statute was drafted expansively to also reach clinical laboratories, and because it applies to improper referrals for any “service” regardless of the payor, public and private insurance plans and even self-pay patients all fall within its reach.
Creative and aggressive plaintiffs’ lawyers are forever on the hunt for new theories under which to bring potentially lucrative class action lawsuits under plaintiff-friendly state consumer protection statutes, with California being the most favored forum. The dietary supplement industry has been in the plaintiffs’ bar’s crosshairs for more than a decade now. As the case law has evolved, supplement companies have had notable success fighting these suits. Just last week, Judge Miller in the Southern District of California tossed a proposed class action ...