A full-stack platform to evaluate LLMs and their data against a wide range of safety issues and security threats. The evaluation currently covers 40+ tests across 8 services, including safety alignment, adversarial robustness, data and model privacy, fairness and bias, and code security.