ModelScan


Summary

ModelScan is a security-focused static analysis tool for machine learning model artifacts that detects unsafe and potentially malicious serialization behaviors. It inspects popular model formats to identify dangerous constructs such as unsafe pickle operations and executable Keras Lambda layers. ModelScan is typically used in CI/CD pipelines, artifact intake workflows, and security reviews to reduce supply-chain risk from untrusted or third-party models.
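The pickle risk described above can be shown with a minimal sketch. The `EvilPayload` class below is a hypothetical malicious artifact, and the opcode walk is an illustration of the static-inspection approach such scanners take; it is not ModelScan's actual implementation.

```python
import os
import pickle
import pickletools

# Hypothetical malicious payload: __reduce__ tells pickle to call os.system
# when the file is loaded -- the classic pickle code-execution vector.
class EvilPayload:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

data = pickle.dumps(EvilPayload(), protocol=0)

# Static inspection: walk the pickle opcode stream and record GLOBAL imports
# without ever calling pickle.loads(), so the payload never executes.
suspicious = []
for opcode, arg, _pos in pickletools.genops(data):
    if opcode.name == "GLOBAL":
        suspicious.append(arg)  # e.g. 'posix system' on Linux

print(suspicious)
```

A scanner built this way can flag the `system` import as dangerous purely from the serialized bytes, which is why the model never needs to be executed during a scan.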


Key Takeaways

  • Detects unsafe deserialization patterns in ML model files, including pickle-based code execution risks
  • Supports multiple model formats with format-specific scanners (PyTorch, Keras, HDF5, ONNX)
  • Designed for CLI-driven and automated security workflows rather than model execution
  • Provides an extensible scanner architecture for adding new formats
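The extensible, format-specific design in the list above can be sketched as a registry that dispatches on file suffix. All names here (`SCANNERS`, `register`, `scan`) and the naive byte search are illustrative assumptions, not ModelScan's real internals.

```python
from pathlib import Path
from typing import Callable

# Hypothetical registry mapping file suffixes to scan functions.
SCANNERS: dict[str, Callable[[bytes], list[str]]] = {}

def register(*suffixes: str):
    """Decorator that registers a scanner for one or more file suffixes."""
    def wrap(fn: Callable[[bytes], list[str]]):
        for s in suffixes:
            SCANNERS[s] = fn
        return fn
    return wrap

@register(".pkl", ".pt", ".bin")
def scan_pickle(data: bytes) -> list[str]:
    # Naive byte search -- a stand-in for real pickle opcode analysis.
    issues = []
    for needle in (b"system", b"exec", b"eval"):
        if needle in data:
            issues.append(f"suspicious token: {needle.decode()}")
    return issues

def scan(path: Path) -> list[str]:
    """Dispatch to the scanner registered for the file's suffix."""
    scanner = SCANNERS.get(path.suffix)
    if scanner is None:
        return [f"no scanner for {path.suffix}"]
    return scanner(path.read_bytes())
```

Adding support for a new format then only requires registering another function, which is the kind of extension point the takeaway refers to.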

code/security-tools/scanners/modelscan



License

Apache-2.0