San Antonio Digital Forensics : LCG Discovery Experts
Address: 306 Morton St., Richmond, TX 77469

Latest Blog in The eDiscovery Zone
Forensics and Futures: Navigating Digital Evidence, AI, and Risk in 2026 – Part 2
Series context. Part 2 of the Forensics and Futures 2026 series examines how cloud storage systems reshape digital evidence, expert testimony, and evidentiary risk. As organizations rely on object storage, managed backups, and provider-controlled retention, courts increasingly scrutinize how cloud-stored evidence is authenticated, preserved, and explained by experts. [1]
Cloud Storage Forensics Is an Expert Testimony Problem
Cloud storage is often discussed as a technical or security challenge. In litigation and regulatory proceedings, it is fundamentally a challenge of expert testimony.
Unlike evidence on traditional media, cloud-stored evidence:
Is abstracted from the physical hardware on which it resides
Is collected using mechanisms and tools that do not behave the same way as traditional preservation tools
Is often collected from a live environment, making it difficult, if not impossible, to validate the entire collection ("image") using hashes
May contain metadata altered by the cloud storage service, which may not be consistent with standard file metadata from an isolated physical source
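The hash-validation limitation above can be illustrated with a short sketch (the byte strings and function name below are hypothetical illustrations, not output from any forensic tool). A cryptographic hash validates a collection only if the source bytes are identical at acquisition and at verification; a live cloud object that changes between reads breaks that guarantee in a way a static disk image does not.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# A static disk image hashed at acquisition and again at review:
# identical bytes always produce an identical digest.
acquired = b"static disk image contents"
assert sha256_of(acquired) == sha256_of(acquired)

# A live cloud object may change between the first and second read,
# so the digests differ and the collection cannot be validated as a whole.
first_read = b"cloud object contents, version 1"
second_read = b"cloud object contents, version 2"
print(sha256_of(first_read) == sha256_of(second_read))  # False
```

This is why cloud collections are typically validated per-file at the moment of download rather than with a single end-to-end image hash.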
As a result, expert witnesses are no longer testifying solely about what the evidence shows, but about:
How the storage system functions
How the extraction process was accomplished
What the examiner could and could not control
Where evidentiary uncertainty may exist
Beyond Automation: Why Human Judgment Remains Critical in AI Systems, Part 2: Risk Management Without Humans
Series context. This is Part 2 of Beyond Automation, a multi-part examination of the risks that emerge when organizations remove human judgment from AI-enabled decision systems. Building on Part 1’s analysis of over-automation and silent failure modes, this installment examines how these failures manifest in enterprise risk management. [1]
The Illusion of Precision in AI-Driven Risk Programs
Risk management is among the most aggressively automated enterprise functions. AI-driven platforms now score third-party risk, prioritize fraud alerts, manage AML and KYC screening, and populate enterprise risk registers at machine speed. These systems promise consistency, efficiency, and objectivity in environments that have historically relied on human judgment.
The problem is not automation itself. The problem arises when AI-generated outputs are treated as decisions rather than inputs.
The Mindset Shift – Part 5: From Awareness to Action, Building a Safety Culture
Series context. This is Part 5 of the Mindset Shift series, moving from individual readiness toward organizational culture. After exploring personal preparedness, time and distance, Run–Hide–Fight, and threat anatomy, this installment explains why safety programs fail without cultural reinforcement across leadership, environment, and daily practice. [1]
Why Safety Culture Fails on Paper but Matters in Practice
Many organizations believe they have a strong safety program because they have policies, training modules, and an Emergency Operations Plan (EOP). Yet during real-world incidents, these same organizations often experience communication breakdowns, slow decision-making, and a lack of coordinated response.
The problem is not the absence of documentation. It is the absence of a safety culture that activates those documents. Culture determines whether people speak up, whether leaders reinforce readiness, and whether employees trust the systems designed to keep them safe.