zarkara/ResNet-Damage-Assessor

Disaster Zone Damage Assessment System

A real-time damage assessment system that uses deep learning to analyze video feeds from drones and other sources to quantify and classify damage in disaster zones.

Overview

This system provides automated damage assessment by processing video feeds in real time with a ResNet-50 backbone combined with object detection. It can analyze both live webcam feeds and pre-recorded video footage to identify, classify, and quantify damage to infrastructure and buildings in disaster-affected areas.

Features

  • Real-time video analysis from webcam or local video files
  • Object detection and damage classification
  • Severity assessment (Minor, Moderate, Severe, Catastrophic)
  • Live statistics and visualization
  • GPU-accelerated processing for production deployment
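The four severity levels above could be derived from a normalized damage score. A minimal TypeScript sketch, assuming illustrative thresholds (the function name and cutoffs are not taken from the project's code):

```typescript
// Hypothetical mapping from a normalized damage score in [0, 1] to the
// four severity levels listed above; thresholds are illustrative only.
type Severity = "Minor" | "Moderate" | "Severe" | "Catastrophic";

function classifySeverity(damageScore: number): Severity {
  if (damageScore < 0.25) return "Minor";
  if (damageScore < 0.5) return "Moderate";
  if (damageScore < 0.75) return "Severe";
  return "Catastrophic";
}

console.log(classifySeverity(0.8)); // "Catastrophic"
```

In practice the score would come from the classifier's output, and the thresholds would be tuned against labeled damage imagery.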

Why ResNet-50?

ResNet-50 is particularly well-suited for damage assessment tasks for several key reasons:

  1. Deep Feature Extraction

    • The 50-layer architecture allows for hierarchical feature learning
    • Early layers capture basic patterns (edges, textures)
    • Deeper layers learn complex damage patterns (cracks, collapses, structural failures)
  2. Residual Learning

    • Skip connections help maintain gradient flow
    • Enables better training of very deep networks
    • Particularly useful for detecting subtle damage patterns
  3. Transfer Learning Benefits

    • Pre-trained weights on ImageNet provide excellent starting features
    • Requires less training data for damage-specific fine-tuning
    • Generalizes well to various types of infrastructure
  4. Production-Ready Performance

    • A balanced trade-off between network depth and computational cost
    • Optimized for GPU acceleration
    • Good inference speed for real-time processing
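The residual learning idea in point 2 can be sketched in a few lines: a block computes a learned residual F(x) and adds the input back through a skip connection, so the identity path keeps gradients flowing. The stand-in transform below is illustrative, not the network's actual layers:

```typescript
// Minimal sketch of a residual ("skip") connection: y = F(x) + x.
// F is a stand-in for the block's learned transform.
function residualBlock(
  x: number[],
  f: (v: number[]) => number[]
): number[] {
  const fx = f(x);
  return x.map((xi, i) => xi + fx[i]); // add the input back in
}

// If F outputs all zeros, the block is exactly the identity, which is
// why stacking many such blocks stays trainable.
const identityCase = residualBlock([1, 2, 3], (v) => v.map(() => 0));
console.log(identityCase); // [1, 2, 3]
```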

Technical Stack

  • Frontend: React with TypeScript
  • Deep Learning: TensorFlow.js
  • Object Detection: COCO-SSD
  • Visualization: Chart.js
  • Styling: Tailwind CSS
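As a sketch of how the pieces above fit together, per-frame COCO-SSD detections could be aggregated into running counts for the Chart.js dashboard. The `Detection` shape mirrors coco-ssd's `DetectedObject` (`bbox`, `class`, `score`); the aggregation logic itself is an assumption, not the project's code:

```typescript
// Aggregate per-frame detections into per-class counts, dropping
// low-confidence boxes. Shape mirrors coco-ssd's DetectedObject.
interface Detection {
  bbox: [number, number, number, number]; // [x, y, width, height]
  class: string;
  score: number;
}

function summarize(
  detections: Detection[],
  minScore = 0.5
): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const d of detections) {
    if (d.score < minScore) continue; // filter low-confidence detections
    counts[d.class] = (counts[d.class] ?? 0) + 1;
  }
  return counts;
}
```

A dashboard would call this once per processed frame and feed the counts into the chart's dataset.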

Production Deployment

The system is designed for GPU-accelerated deployment with:

  • NVIDIA Tesla V100/A100 support
  • CUDA optimization
  • TensorRT integration
  • Kubernetes orchestration
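For the Kubernetes piece, a GPU-backed deployment typically requests the `nvidia.com/gpu` resource so pods land on GPU nodes. The fragment below is an illustrative sketch; the image name, labels, and replica count are placeholders, not values from this repository:

```yaml
# Illustrative Deployment fragment requesting one NVIDIA GPU per replica.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: damage-assessor
spec:
  replicas: 2
  selector:
    matchLabels:
      app: damage-assessor
  template:
    metadata:
      labels:
        app: damage-assessor
    spec:
      containers:
        - name: inference
          image: damage-assessor:latest   # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1           # schedules onto a GPU node
```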

Getting Started

  1. Install dependencies:

    npm install
  2. Start the development server:

    npm run dev
  3. Access the application and select input source:

    • Use webcam for live analysis
    • Upload local video files for offline analysis

Production Scaling

For production deployment, refer to prod/README.md for detailed instructions on:

  • GPU cluster setup
  • Kubernetes deployment
  • Load balancing
  • Monitoring and logging

Performance Considerations

The system is optimized for:

  • Real-time processing (30 FPS target)
  • Multi-GPU scaling
  • Batch processing for efficiency
  • Memory management for continuous operation
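One way to hold the 30 FPS target when inference runs slower than the frame budget is to skip frames between inferences. This helper is a sketch of that idea, not the project's scheduler:

```typescript
// Given a target frame rate and a measured per-inference time, report
// how many incoming frames to drop between inferences so the pipeline
// keeps up with real time instead of falling behind.
function framesToSkip(targetFps: number, inferenceMs: number): number {
  const budgetMs = 1000 / targetFps;            // ~33.3 ms at 30 FPS
  if (inferenceMs <= budgetMs) return 0;        // fast enough for every frame
  return Math.ceil(inferenceMs / budgetMs) - 1; // drop frames to stay live
}

console.log(framesToSkip(30, 20)); // 0
console.log(framesToSkip(30, 70)); // 2
```

Batch processing can then group the frames that are kept into a single GPU call for better throughput.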

Future Enhancements

  • Multi-model ensemble for improved accuracy
  • Automated report generation
  • Integration with GIS systems
  • Mobile device support
  • Edge deployment options

About

ResNet Video Damage Assessor browser sample
