Eventizing Traditionally Opaque Binary Neural Networks as 1-safe Petri net Models

2026-02-13
Machine Learning
AI summary

The authors present a new way to understand and analyze Binary Neural Networks (BNNs), which use only binary values to save energy but are hard to explain. They use Petri nets, a mathematical tool that represents processes as events, to map out how BNNs work step-by-step. This approach helps reveal how different operations in BNNs depend on each other, making it easier to check if the network behaves correctly and safely. The authors also test their method to ensure it can spot problems like deadlocks and verify that the network's operations happen in the right order.

Binary Neural Networks, Petri nets, Event-driven processes, Causal relationships, Formal verification, Deadlock-freeness, Concurrency, Reachability, Gradient computation, Mutual exclusion
Authors
Mohamed Tarraf, Alex Chan, Alex Yakovlev, Rishad Shafik
Abstract
Binary Neural Networks (BNNs) offer a low-complexity, energy-efficient alternative to traditional full-precision neural networks by constraining their weights and activations to binary values. However, their discrete, highly non-linear behavior makes them difficult to explain, validate, and formally verify. As a result, BNNs remain largely opaque, limiting their suitability in safety-critical domains, where causal transparency and behavioral guarantees are essential. In this work, we introduce a Petri net (PN)-based framework that captures a BNN's internal operations as event-driven processes. By "eventizing" these operations, we expose their causal relationships and dependencies, enabling fine-grained analysis of concurrency, ordering, and state evolution. We construct modular PN blueprints for core BNN components, including activation, gradient computation, and weight updates, and compose them into a complete system-level model. We then validate the composed PN against a reference software-based BNN and apply reachability and structural checks to establish 1-safeness, deadlock-freeness, mutual exclusion, and correct-by-construction causal sequencing, before assessing its scalability and complexity at the segment, component, and system levels using the automated measurement tools in Workcraft. Overall, this framework enables causal introspection of transparent, event-driven BNNs that are amenable to formal reasoning and verification.
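To make the core idea concrete, here is a minimal sketch (not the authors' code) of how a BNN operation can be "eventized" as a 1-safe Petri net: places hold at most one token, transitions model events such as a binarized (sign) activation, and exhaustive reachability checks 1-safeness and deadlock behavior. The place and transition names (`preact_ready`, `act_done`, `sign`) are illustrative assumptions, not taken from the paper.

```python
class PetriNet:
    """Toy 1-safe Petri net: places hold 0/1 tokens; each transition
    maps a set of input places to a set of output places."""

    def __init__(self, places, transitions):
        # transitions: dict name -> (pre_places, post_places)
        self.places = list(places)
        self.transitions = transitions

    def enabled(self, marking, t):
        pre, post = self.transitions[t]
        # 1-safe semantics: all input places marked, and firing must not
        # place a second token on an already-marked output place
        return (all(marking[p] == 1 for p in pre)
                and all(marking[q] == 0 or q in pre for q in post))

    def fire(self, marking, t):
        pre, post = self.transitions[t]
        m = dict(marking)
        for p in pre:
            m[p] = 0
        for q in post:
            m[q] = 1
        return m

    def reachable(self, initial):
        # Exhaustive reachability search; feasible because 1-safeness
        # bounds the state space to at most 2^|places| markings
        seen, stack = set(), [tuple(sorted(initial.items()))]
        while stack:
            state = stack.pop()
            if state in seen:
                continue
            seen.add(state)
            marking = dict(state)
            for t in self.transitions:
                if self.enabled(marking, t):
                    stack.append(tuple(sorted(self.fire(marking, t).items())))
        return seen


# Eventized binary activation: a "sign" event consumes the
# pre-activation-ready token and produces a binary activation token.
net = PetriNet(
    places=["preact_ready", "act_done"],
    transitions={"sign": (["preact_ready"], ["act_done"])},
)
states = net.reachable({"preact_ready": 1, "act_done": 0})

# Every reachable marking keeps token counts in {0, 1}: the net is 1-safe
assert all(v in (0, 1) for state in states for _, v in state)
```

In the paper's framework such fragments are composed into component- and system-level models and checked with Workcraft's automated tools rather than a hand-rolled search like this one, but the sketch shows the shape of the reachability argument behind the 1-safeness and deadlock-freeness claims.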