
Prediction API

The Prediction API provides machine learning-based species identification and prediction services for mosquito classification.

Router Implementation

Prediction router for mosquito species identification.

This module provides FastAPI endpoints for predicting mosquito species from uploaded images using AI-powered classification. The router handles image validation, coordinates with the prediction service, and returns structured results.

Main Components
  • APIRouter instance configured for prediction endpoints
  • predict_species endpoint for species identification
The prediction system supports:
  • Multiple image formats (JPEG, PNG, etc.)
  • Real-time species identification with confidence scores
  • Optional image saving for predicted results
  • Comprehensive error handling and logging
Example

from fastapi import FastAPI
from backend.routers.prediction import router

app = FastAPI()
app.include_router(router, prefix="/api/v1")

# Now available at POST /api/v1/predict

prediction_service = PredictionService() module-attribute

Service for mosquito species prediction using the CulicidaeLab serve API.

This class provides a high-level interface for species identification from images. It is optimized for production use, leveraging an efficient inference backend with automatic model caching to ensure low latency.

Attributes:
  • save_predicted_images_enabled (bool): Whether to save predicted images.
  • model_id (str): The identifier for the machine learning model being used.

Example

service = PredictionService()
result, error = await service.predict_species(image_data, "test.jpg")

PredictionResult

Model for prediction results.

model_config = {} class-attribute


predict_species(file: UploadFile = File(...)) -> PredictionResult async

Predict mosquito species from an uploaded image.

This endpoint accepts an image file and uses AI-powered classification to identify the mosquito species. The prediction includes confidence scores and optional species information.

Parameters:
  • file (UploadFile, required): The image file to analyze. Must be a valid image format (JPEG, PNG, etc.). The file is validated for content type and non-empty content.

Returns:
  • PredictionResult: A structured response containing:
      - id: Species identifier (lowercase with underscores)
      - scientific_name: The predicted species name
      - probabilities: Dictionary of species -> confidence scores
      - model_id: Identifier of the AI model used
      - confidence: Confidence score of the top prediction
      - image_url_species: URL to processed image (if saving enabled)

Raises:
  • HTTPException: If the file is not an image (400 Bad Request)
  • HTTPException: If the file is empty (400 Bad Request)
  • HTTPException: If prediction fails (500 Internal Server Error)

Example

import requests

# Using curl:
# curl -X POST "http://localhost:8000/predict" \
#      -H "accept: application/json" \
#      -H "Content-Type: multipart/form-data" \
#      -F "file=@mosquito_image.jpg"

# Using Python requests:
response = requests.post(
    "http://localhost:8000/predict",
    files={"file": open("mosquito_image.jpg", "rb")},
)
result = response.json()
print(f"Predicted species: {result['scientific_name']}")
print(f"Confidence: {result['confidence']:.2%}")

Source code in backend\routers\prediction.py
@router.post(
    "/predict",
    response_model=PredictionResult,
    summary="Predict mosquito species from image",
    description="Upload an image of a mosquito to identify its species using AI.",
)
async def predict_species(
    file: UploadFile = File(...),
) -> PredictionResult:
    """Predict mosquito species from an uploaded image.

    This endpoint accepts an image file and uses AI-powered classification
    to identify the mosquito species. The prediction includes confidence scores
    and optional species information.

    Args:
        file (UploadFile): The image file to analyze. Must be a valid image
            format (JPEG, PNG, etc.). The file is validated for content type
            and non-empty content.

    Returns:
        PredictionResult: A structured response containing:
            - id: Species identifier (lowercase with underscores)
            - scientific_name: The predicted species name
            - probabilities: Dictionary of species -> confidence scores
            - model_id: Identifier of the AI model used
            - confidence: Confidence score of the top prediction
            - image_url_species: URL to processed image (if saving enabled)

    Raises:
        HTTPException: If the file is not an image (400 Bad Request)
        HTTPException: If the file is empty (400 Bad Request)
        HTTPException: If prediction fails (500 Internal Server Error)

    Example:
        >>> import requests
        >>>
        >>> # Using curl command:
        >>> # curl -X POST "http://localhost:8000/predict" \
        >>> #      -H "accept: application/json" \
        >>> #      -H "Content-Type: multipart/form-data" \
        >>> #      -F "file=@mosquito_image.jpg"
        >>>
        >>> # Using Python requests:
        >>> response = requests.post(
        ...     "http://localhost:8000/predict",
        ...     files={"file": open("mosquito_image.jpg", "rb")}
        ... )
        >>> result = response.json()
        >>> print(f"Predicted species: {result['scientific_name']}")
        >>> print(f"Confidence: {result['confidence']:.2%}")
    """
    print("\n--- [ROUTER] Received request for /predict ---")
    try:
        content_type = file.content_type
        print(f"[ROUTER] File received. Filename: '{file.filename}', Content-Type: '{content_type}'")

        if not content_type or not content_type.startswith("image/"):
            print("[ROUTER] ERROR: Invalid content type. Raising 400 Bad Request.")
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail=f"File must be an image, got {content_type}",
            )

        print("[ROUTER] Reading file contents...")
        contents = await file.read()
        if not contents:
            print("[ROUTER] ERROR: Empty file uploaded. Raising 400 Bad Request.")
            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Empty file")

        print(f"[ROUTER] File contents read ({len(contents)} bytes). Calling prediction_service...")
        result, error = await prediction_service.predict_species(contents, file.filename)
        print(f"[ROUTER] Prediction service returned. Result: {result is not None}, Error: '{error}'")

        if error:
            print("[ROUTER] ERROR: Prediction service returned an error. Raising 500 Internal Server Error.")
            raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"Prediction failed: {error}")

        if not result:
            print("[ROUTER] ERROR: Prediction service returned no result and no error. Raising 500.")
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail="Prediction failed with no specific error",
            )

        print(f"[ROUTER] Prediction successful. Returning result for '{result.scientific_name}'.")
        return result

    except HTTPException as http_exc:
        # Re-raise HTTPExceptions to let FastAPI handle them
        print(f"[ROUTER] Caught HTTPException: {http_exc.status_code} - {http_exc.detail}")
        raise
    except Exception as e:
        # Catch any other unexpected errors
        print(f"[ROUTER] CRITICAL ERROR in /predict: {type(e).__name__} - {str(e)}")
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"Prediction failed: {str(e)}")

Data Schemas

The Prediction API uses Pydantic schemas for prediction request/response validation:

backend.schemas.prediction_schemas

Pydantic models for the Prediction service.

This module defines the schema models used for prediction results and model validation in API endpoints.

PredictionResult

Model for prediction results.

model_config = {} class-attribute

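The documented fields of a prediction result can be mirrored in a minimal sketch. Note this is illustrative only: the real PredictionResult is a Pydantic model; the field names below come from the endpoint documentation, while the exact types and default values are assumptions.

```python
from __future__ import annotations
from dataclasses import dataclass

# Illustrative stdlib stand-in for the Pydantic PredictionResult model.
# Field names follow the endpoint docs; types/defaults are assumptions.
@dataclass
class PredictionResultSketch:
    id: str                          # species identifier, lowercase with underscores
    scientific_name: str             # predicted species name
    probabilities: dict[str, float]  # species -> confidence scores
    model_id: str                    # identifier of the AI model used
    confidence: float                # confidence of the top prediction
    image_url_species: str | None = None  # set only when image saving is enabled

result = PredictionResultSketch(
    id="aedes_aegypti",
    scientific_name="Aedes aegypti",
    probabilities={"Aedes aegypti": 0.92, "Aedes albopictus": 0.08},
    model_id="culicidaelab-classifier",
    confidence=0.92,
)
print(result.scientific_name)  # -> Aedes aegypti
```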

Service Layer

The Prediction API integrates with machine learning service layers:

backend.services.prediction_service

Machine learning prediction service for mosquito species identification.

This module provides functionality for predicting mosquito species from images using a trained MosquitoClassifier model. It handles model loading, image processing, and prediction with confidence scoring and species identification.

Example

from backend.services.prediction_service import prediction_service

result, error = await prediction_service.predict_species(image_data, "mosquito.jpg")
if result:
    print(f"Predicted species: {result.scientific_name}")

app_settings = AppSettings(APP_NAME='CulicidaeLab API', API_V1_STR='/api', DATABASE_PATH='backend/data/.lancedb', SAVE_PREDICTED_IMAGES='1', BACKEND_CORS_ORIGINS=['http://localhost:8765', 'http://127.0.0.1:8765']) module-attribute

Application settings configuration for CulicidaeLab API backend.

This class defines all configuration parameters for the CulicidaeLab API server, using Pydantic BaseSettings for environment variable support and validation. Settings can be overridden via environment variables with the CULICIDAELAB_ prefix.

Attributes:
  • APP_NAME (str): Name of the application displayed in API documentation.
  • API_V1_STR (str): Base path prefix for API version 1 endpoints.
  • DATABASE_PATH (str): File system path to the LanceDB database directory.
  • SAVE_PREDICTED_IMAGES (str | bool): Whether to save predicted images to disk.
  • BACKEND_CORS_ORIGINS (list[str]): List of allowed CORS origins for frontend access.

Example

>>> settings = AppSettings()
>>> print(f"App name: {settings.APP_NAME}")
App name: CulicidaeLab API
>>> print(f"Database path: {settings.DATABASE_PATH}")
Database path: .lancedb
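Settings can be overridden via environment variables with the CULICIDAELAB_ prefix. A minimal stdlib sketch of that naming convention (the real AppSettings uses Pydantic BaseSettings, which handles the prefix and validation itself; the helper function here is hypothetical):

```python
import os

# Simulate overriding a setting before the app reads its configuration.
# The real AppSettings relies on Pydantic BaseSettings; this sketch only
# illustrates the documented CULICIDAELAB_ prefix convention.
os.environ["CULICIDAELAB_APP_NAME"] = "My Custom API"

def read_setting(name: str, default: str) -> str:
    """Hypothetical helper: look up a setting via the documented prefix."""
    return os.environ.get(f"CULICIDAELAB_{name}", default)

app_name = read_setting("APP_NAME", "CulicidaeLab API")
database_path = read_setting("DATABASE_PATH", "backend/data/.lancedb")
print(app_name)       # -> My Custom API
print(database_path)  # -> backend/data/.lancedb (no override set)
```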


PredictionService()

Service for mosquito species prediction using the CulicidaeLab serve API.

This class provides a high-level interface for species identification from images. It is optimized for production use, leveraging an efficient inference backend with automatic model caching to ensure low latency.

Attributes:
  • save_predicted_images_enabled (bool): Whether to save predicted images.
  • model_id (str): The identifier for the machine learning model being used.

Example

service = PredictionService()
result, error = await service.predict_species(image_data, "test.jpg")

Initialize the PredictionService and retrieve the model configuration.

Sets up the service based on application settings and fetches the model architecture information from the CulicidaeLab library settings to generate a descriptive model ID.

save_predicted_image(image_data: bytes, filename: str, quiet: bool = True)

Asynchronously save the predicted image in multiple sizes.

This method saves the original image along with resized versions (224x224 and 100x100) to the static images directory. Failures are handled silently to not disrupt the prediction flow unless quiet mode is disabled.

Parameters:
  • image_data (bytes, required): The raw image data to save.
  • filename (str, required): The filename to use for the saved images.
  • quiet (bool, default True): If False, exceptions will be raised instead of being handled silently.

Raises:
  • Exception: If quiet=False and an error occurs during image saving, the original exception is re-raised.

Example

service = PredictionService()
await service.save_predicted_image(image_bytes, "mosquito_001.jpg")
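The docs state that the original image plus 224x224 and 100x100 variants are written to the static images directory. A sketch of a plausible naming scheme for those three outputs — the directory and the "_224"/"_100" suffix pattern are assumptions, and actual resizing would use an imaging library such as Pillow:

```python
from pathlib import Path

# Hypothetical path-derivation helper for the three documented sizes:
# original, 224x224, and 100x100. Suffix/directory naming is assumed.
def derive_output_paths(filename: str, static_dir: str = "static/images") -> list[Path]:
    base = Path(filename)
    stem, suffix = base.stem, base.suffix
    root = Path(static_dir)
    return [
        root / base.name,              # original size
        root / f"{stem}_224{suffix}",  # 224x224 variant
        root / f"{stem}_100{suffix}",  # 100x100 variant
    ]

paths = derive_output_paths("mosquito_001.jpg")
print([p.name for p in paths])
# -> ['mosquito_001.jpg', 'mosquito_001_224.jpg', 'mosquito_001_100.jpg']
```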

predict_species(image_data: bytes, filename: str) -> tuple[PredictionResult | None, str | None]

Predict mosquito species from image data using the serve API.

This method processes image data using the high-performance serve function. It translates the library's output into the backend's PredictionResult schema, including the correct model ID.

Parameters:
  • image_data (bytes, required): The raw image data (e.g., JPEG, PNG).
  • filename (str, required): The original filename of the image.

Returns:
  • tuple[PredictionResult | None, str | None]: A tuple containing the PredictionResult or None, and an error message or None.
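The (result, error) contract means callers should check the error slot before using the result. A self-contained sketch of that pattern — `FakeResult` and `fake_predict_species` are stand-ins, since only the tuple contract is documented (the real method is awaited and returns a PredictionResult):

```python
# Stand-ins illustrating the documented (result, error) return contract.
class FakeResult:
    scientific_name = "Aedes aegypti"
    confidence = 0.92

def fake_predict_species(image_data: bytes, filename: str):
    if not image_data:
        return None, "Empty image data"
    return FakeResult(), None

result, error = fake_predict_species(b"\xff\xd8fake-jpeg-bytes", "mosquito.jpg")
if error:
    print(f"Prediction failed: {error}")
elif result:
    print(f"{result.scientific_name} ({result.confidence:.0%})")  # -> Aedes aegypti (92%)
```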

PredictionResult

Model for prediction results.

model_config = {} class-attribute


Example Usage

Species Prediction

import httpx

async with httpx.AsyncClient() as client:
    # Upload image for species prediction; the endpoint expects the
    # multipart field to be named "file".
    with open("mosquito_image.jpg", "rb") as image_file:
        files = {"file": image_file}
        response = await client.post(
            "http://localhost:8000/api/v1/predict",
            files=files,
        )
        prediction = response.json()

    print(f"Predicted species: {prediction['scientific_name']}")
    print(f"Confidence: {prediction['confidence']:.2%}")