Model Performance & Architecture

Comprehensive evaluation of OculusAI's two deep learning models: the eye disease detection model and the Ishihara digit recognition model used for colour blindness testing.

Model Architecture

Eye Disease Detection CNN

Architecture Details:

  • Framework: TensorFlow/Keras
  • Input Size: 256 × 256 × 3 (RGB)
  • Model Type: Deep CNN
  • Output Classes: 4 (Cataract, Diabetic Retinopathy, Glaucoma, Normal)

Training Configuration:

  • Optimizer: Adam
  • Loss Function: Categorical Crossentropy
  • Activation: ReLU (hidden), Softmax (output)
  • Data Augmentation: Rotation, Flip, Zoom (see the sketch below)
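
A minimal sketch of the listed augmentation pipeline (rotation, flip, zoom) using Keras preprocessing layers; the specific factors are illustrative assumptions, not values taken from the actual training run:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Augmentation pipeline for the listed transforms; factor values are assumptions.
augmentation = keras.Sequential([
    layers.RandomRotation(0.1),       # rotate by up to ±10% of a full turn
    layers.RandomFlip("horizontal"),  # mirror the retina image horizontally
    layers.RandomZoom(0.1),           # zoom in/out by up to 10%
])

# Applied on the fly to 256×256×3 image batches during training, e.g.:
images = tf.random.uniform((8, 256, 256, 3))     # dummy batch of RGB images
augmented = augmentation(images, training=True)  # same shape, randomly transformed
```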

Network Flow

Input Layer (256×256×3) → Conv Layers (feature extraction) → Pooling (dimensionality reduction) → Dense Layers (classification) → Output (4 classes)
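
A minimal Keras sketch of this flow, compiled with the Adam optimizer and categorical crossentropy listed above; the number of convolutional blocks, filter counts, and dense width are assumptions, since the source does not specify them:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes: int = 4) -> keras.Model:
    """Deep CNN matching the documented flow; layer sizes are illustrative."""
    inputs = keras.Input(shape=(256, 256, 3))   # Input Layer: 256×256×3 RGB
    x = layers.Rescaling(1.0 / 255)(inputs)     # scale pixels to [0, 1]

    # Conv layers (feature extraction), each followed by pooling
    # (dimensionality reduction). Filter counts are assumptions.
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)

    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)                    # dense layers
    outputs = layers.Dense(num_classes, activation="softmax")(x)   # 4-class output

    model = keras.Model(inputs, outputs)
    model.compile(
        optimizer="adam",
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```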

Confusion Matrix

Performance breakdown showing predicted vs actual classifications across all disease categories. Each row shows how samples of that actual class were distributed across the predicted classes (values in %).

                           Predicted Class
Actual        Cataract     DR     Glaucoma     Normal
Cataract          85        8         4           3
DR                 5       88         4           3
Glaucoma           6        3        87           4
Normal             2        3         2          93

Per-Class Accuracy:

  • Cataract: 85%
  • DR: 88%
  • Glaucoma: 87%
  • Normal: 93%
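
The per-class figures are simply the diagonal of the confusion matrix above, divided by each row's total; a short NumPy sketch of that calculation:

```python
import numpy as np

# Confusion matrix from the table above: rows = actual class, columns = predicted,
# values in percent (each row sums to 100).
classes = ["Cataract", "DR", "Glaucoma", "Normal"]
cm = np.array([
    [85,  8,  4,  3],   # Cataract
    [ 5, 88,  4,  3],   # DR
    [ 6,  3, 87,  4],   # Glaucoma
    [ 2,  3,  2, 93],   # Normal
])

# Per-class accuracy (recall): correct predictions / all samples of that class.
per_class = np.diag(cm) / cm.sum(axis=1) * 100
for name, acc in zip(classes, per_class):
    print(f"{name}: {acc:.0f}%")   # 85%, 88%, 87%, 93%
```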

Training Performance

Accuracy Over Epochs

The model reached ~89% validation accuracy after 8 epochs.

Loss Over Epochs

Training loss decreased steadily, converging at around 0.38.
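
These curves come from the History object that Keras returns from fit; a sketch of the training call, reusing the build_model and augmentation helpers sketched earlier and assuming hypothetical train_ds / val_ds datasets (not defined in the source):

```python
# train_ds and val_ds are hypothetical tf.data.Dataset objects yielding
# (256×256×3 image, one-hot label) batches; they are not defined in the source.
train_ds = train_ds.map(lambda x, y: (augmentation(x, training=True), y))

model = build_model()
history = model.fit(train_ds, validation_data=val_ds, epochs=8)

# history.history holds the plotted curves per epoch:
#   "accuracy" / "val_accuracy"  -> ~0.89 validation accuracy after 8 epochs
#   "loss"     / "val_loss"      -> training loss converging near 0.38
print(history.history["val_accuracy"][-1], history.history["loss"][-1])
```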

System Architecture

Frontend

Next.js 15

React 19

TypeScript

Tailwind CSS

Backend

Flask API

Python 3.11.9

TensorFlow 2.x

Keras 3.x

ML Model

Deep CNN

.keras format

4-class output

256×256 input
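
A small sketch of loading the saved model in this stack; the file name eye_disease_model.keras is a hypothetical placeholder, since the source only states that the model is stored in .keras format:

```python
from tensorflow import keras

# Hypothetical file name; only the .keras format is stated in the source.
model = keras.models.load_model("eye_disease_model.keras")

# Sanity-check the documented interface: 256×256 RGB in, 4 classes out.
assert model.input_shape == (None, 256, 256, 3)
assert model.output_shape == (None, 4)
```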

Data Flow

Retina Image Upload → Next.js Frontend → Flask API (/api/predict) → Eye Disease Model → Disease Classification
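
A minimal sketch of the /api/predict step of this flow, reusing the loaded model from above; the form-field name "image" and the response schema are assumptions, as the source does not specify the request/response format:

```python
import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("eye_disease_model.keras")  # hypothetical file name
CLASS_NAMES = ["Cataract", "Diabetic Retinopathy", "Glaucoma", "Normal"]

@app.route("/api/predict", methods=["POST"])
def predict():
    # "image" is an assumed form-field name for the uploaded retina photo.
    file = request.files["image"]
    img = Image.open(file.stream).convert("RGB").resize((256, 256))
    # Any pixel rescaling is assumed to be built into the model (e.g. a Rescaling layer).
    batch = np.asarray(img, dtype="float32")[np.newaxis, ...]   # (1, 256, 256, 3)

    probs = model.predict(batch)[0]        # softmax probabilities over 4 classes
    idx = int(np.argmax(probs))
    return jsonify({"prediction": CLASS_NAMES[idx],
                    "confidence": round(float(probs[idx]), 4)})
```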

Performance Summary

  • Overall Accuracy: ~89%
  • Disease Classes: 4
  • Inference Time: <5s
  • Input Resolution: 256 × 256
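
A quick way to sanity-check the single-image inference time, assuming the loaded model from the sketches above:

```python
import time
import numpy as np

batch = np.random.rand(1, 256, 256, 3).astype("float32")  # dummy 256×256 RGB image

start = time.perf_counter()
model.predict(batch)                                       # single forward pass
print(f"Inference took {time.perf_counter() - start:.2f}s")  # compare against the <5s budget
```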