
Backup and Disaster Recovery on AWS

This guide covers backup strategies, disaster recovery patterns, and business continuity planning for AWS infrastructure deployed with FsCDK. It draws on the AWS Well-Architected Reliability Pillar and battle-tested patterns from AWS Solutions Architects.

Understanding Recovery Objectives

Before implementing any backup strategy, define your Recovery Point Objective (RPO) and Recovery Time Objective (RTO). These metrics determine your architecture and costs.

Recovery Point Objective (RPO): Maximum acceptable data loss measured in time. If your RPO is 1 hour, you can lose at most 1 hour of data.

Recovery Time Objective (RTO): Maximum acceptable downtime. If your RTO is 4 hours, you must restore operations within 4 hours of an incident.

Common RPO/RTO Requirements by Industry

| Industry           | Typical RPO  | Typical RTO | Compliance Drivers |
|--------------------|--------------|-------------|--------------------|
| Financial Services | < 1 hour     | < 4 hours   | SOX, PCI DSS       |
| Healthcare         | < 4 hours    | < 8 hours   | HIPAA              |
| E-commerce         | < 15 minutes | < 1 hour    | Revenue impact     |
| SaaS Applications  | < 1 hour     | < 4 hours   | SLA commitments    |
| Media/Content      | < 24 hours   | < 24 hours  | Business tolerance |

Reference: AWS Disaster Recovery Whitepaper (https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html)

RDS Automated Backups

RDS provides automated backups with point-in-time recovery. This is the foundation for database disaster recovery.

#r "../src/bin/Release/net8.0/publish/Amazon.JSII.Runtime.dll"
#r "../src/bin/Release/net8.0/publish/Constructs.dll"
#r "../src/bin/Release/net8.0/publish/Amazon.CDK.Lib.dll"
#r "../src/bin/Release/net8.0/publish/FsCDK.dll"

open FsCDK
open Amazon.CDK
open Amazon.CDK.AWS.RDS
open Amazon.CDK.AWS.EC2
open Amazon.CDK.AWS.DynamoDB
open Amazon.CDK.AWS.CloudWatch
open Amazon.CDK.AWS.S3
open Amazon.CDK.AWS.Lambda

Production Database with Automated Backups

RDS automatically takes continuous backups, enabling restoration to any point within the backup retention window (1-35 days).

stack "ProductionBackupStrategy" {
    env (environment { region "us-east-1" })

    description "Production database with automated backups"

    // VPC for database
    let! prodVpc =
        vpc "ProductionVpc" {
            maxAzs 2
            natGateways 1
        }

    // Production database with maximum backup retention
    rdsInstance "ProductionDB" {
        vpc prodVpc
        postgresEngine
        instanceType (InstanceType.Of(InstanceClass.MEMORY5, InstanceSize.LARGE))
        allocatedStorage 100

        // Backup configuration
        backupRetentionDays 35.0 // Maximum retention for PITR
        preferredBackupWindow "03:00-04:00" // 3-4 AM UTC

        // High availability
        multiAz true

        // Security
        deletionProtection true
        storageEncrypted true

        // Monitoring
        enablePerformanceInsights true
    }
}

Compliance-Driven Retention Policies

Different compliance frameworks mandate specific retention periods. Configure your backup plans accordingly.

PCI DSS Requirements:

  - Retain backups for at least 3 months
  - Keep quarterly backups for 1 year
  - Reference: PCI DSS Requirement 3.1

HIPAA Requirements:

  - Retain backups for 6 years minimum
  - Ensure encryption at rest and in transit
  - Reference: 45 CFR §164.308(a)(7)(ii)(A)

SOX Requirements:

  - Retain financial data backups for 7 years
  - Ensure immutability and audit trails
  - Reference: Sarbanes-Oxley Section 802

For long-term compliance retention beyond 35 days, use RDS snapshots exported to S3 with lifecycle policies.

stack "ComplianceBackups" {
    env (environment { region "us-east-1" })

    // Compliance database with automated snapshots
    let! compVpc = vpc "ComplianceVpc" { maxAzs 2 }

    rdsInstance "ComplianceDB" {
        vpc compVpc
        postgresEngine
        backupRetentionDays 35.0
        deletionProtection true
        storageEncrypted true
    }

    // S3 bucket for long-term snapshot storage
    bucket "long-term-backup-storage" {
        versioned true
        encryption BucketEncryption.KMS_MANAGED

        // Lifecycle policy for cost optimization
        lifecycleRule {
            enabled true

            // Transition to Glacier after 90 days
            transitions
                [ transition {
                      storageClass StorageClass.GLACIER
                      transitionAfter (Duration.Days 90.0)
                  } ]

            // Delete after 7 years (SOX compliance)
            expiration (Duration.Days 2555.0)
        }
    }
}

Reference: Export RDS snapshots to S3 (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html)
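The export itself is a one-off API call rather than a CDK construct. A sketch of the CLI invocation follows; the task identifier, ARNs, bucket name, and KMS key are all placeholders for this example:

```shell
# Export a snapshot to S3 for long-term compliance retention.
# All identifiers, ARNs, and the KMS key below are placeholders.
aws rds start-export-task \
    --export-task-identifier compliance-export-2025q1 \
    --source-arn arn:aws:rds:us-east-1:123456789012:snapshot:compliance-db-snapshot \
    --s3-bucket-name long-term-backup-storage \
    --iam-role-arn arn:aws:iam::123456789012:role/rds-s3-export-role \
    --kms-key-id arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID
```

The exported data lands in S3 as Parquet, where the bucket's lifecycle rules take over the Glacier transition and expiration.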

Point-in-Time Recovery for Databases

RDS Point-in-Time Recovery

RDS PITR allows restoration to any second within the retention period. This is critical for recovering from data corruption or accidental deletions.

stack "PITRDatabase" {
    let! pitrVpc = vpc "PITRVpc" { maxAzs 2 }

    rdsInstance "PITRDatabase" {
        vpc pitrVpc
        postgresEngine
        backupRetentionDays 35.0 // Maximum PITR retention
        multiAz true
        deletionProtection true
        enablePerformanceInsights true
    }
}
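Restores are performed outside CDK. A sketch of restoring to a specific second (instance identifiers and the timestamp are placeholders); note that PITR always restores into a new instance rather than overwriting the source:

```shell
# Restore to a specific second within the retention window.
# Identifiers and timestamp are placeholders for this example.
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier pitr-database \
    --target-db-instance-identifier pitr-database-restored \
    --restore-time 2025-06-01T03:15:00Z
```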

DynamoDB Point-in-Time Recovery

DynamoDB PITR provides continuous backups for the preceding 35 days; FsCDK's production defaults enable it.

stack "DynamoDBBackup" {
    table "TransactionalData" {
        partitionKey "id" AttributeType.STRING
        sortKey "timestamp" AttributeType.NUMBER
        billingMode BillingMode.PAY_PER_REQUEST

        // PITR enabled by default in FsCDK
        pointInTimeRecovery true

        // Enable streams for replication
        stream StreamViewType.NEW_AND_OLD_IMAGES
    }
}
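As with RDS, a DynamoDB PITR restore always creates a new table. A sketch of the restore call (table names and timestamp are placeholders):

```shell
# Restore the table to an earlier state; DynamoDB creates a new table.
# Table names and the restore timestamp are placeholders.
aws dynamodb restore-table-to-point-in-time \
    --source-table-name TransactionalData \
    --target-table-name TransactionalData-restored \
    --restore-date-time 2025-06-01T03:15:00Z
```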

Reference: DynamoDB Point-in-Time Recovery (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html)

Cross-Region Replication for Disaster Recovery

For mission-critical workloads, implement cross-region replication to survive regional failures.

RDS Cross-Region Read Replicas

Create read replicas in secondary regions that can be promoted during a disaster. According to AWS, cross-region replicas typically have 1-5 second replication lag.

stack "MultiRegionDatabase" {
    env (environment { region "us-east-1" })

    description "Primary database with cross-region DR"

    let! primaryVpc =
        vpc "PrimaryVpc" {
            maxAzs 3
            natGateways 2
        }

    // Primary database
    rdsInstance "PrimaryDB" {
        vpc primaryVpc
        postgresEngine
        instanceType (InstanceType.Of(InstanceClass.MEMORY5, InstanceSize.XLARGE))
        multiAz true
        backupRetentionDays 30.0
        enablePerformanceInsights true
        deletionProtection true
    }
}

For the DR region, deploy a separate stack:

stack "DRDatabase" {
    environment {
        region "us-west-2"  // DR region
    }

    let drVpc = vpc "DRVpc" { maxAzs 3 }

    // Create read replica from primary (via AWS Console or CLI)
    // aws rds create-db-instance-read-replica \
    //   --db-instance-identifier dr-replica \
    //   --source-db-instance-identifier arn:aws:rds:us-east-1:account:db:primary-db \
    //   --region us-west-2
}
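During an actual failover, the replica is promoted to a standalone, writable primary. A sketch of the promotion call, using the replica identifier from the comment above:

```shell
# Promote the DR replica to a standalone primary during failover.
# "dr-replica" matches the identifier used when the replica was created.
aws rds promote-read-replica \
    --db-instance-identifier dr-replica \
    --region us-west-2
```

Promotion is one-way: once promoted, the instance stops replicating from the old primary, so update application connection strings as part of the same runbook step.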

Reference: AWS RDS Best Practices (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html)

DynamoDB Global Tables

DynamoDB Global Tables provide automatic multi-region replication with typical latency under 1 second.

stack "GlobalDynamoDB" {
    env (environment { region "us-east-1" })

    table "GlobalUserData" {
        partitionKey "userId" AttributeType.STRING
        billingMode BillingMode.PAY_PER_REQUEST

        // Enable streams for global table replication
        stream StreamViewType.NEW_AND_OLD_IMAGES

        pointInTimeRecovery true
    }
}

After deploying the table with streams enabled, link the replicas via the AWS CLI. Note that create-global-table targets the legacy (2017.11.29) global tables version and requires an identical table to already exist in every listed region:

aws dynamodb create-global-table \
    --global-table-name GlobalUserData \
    --replication-group RegionName=us-east-1 \
    --replication-group RegionName=us-west-2 \
    --replication-group RegionName=eu-west-1

Reference: DynamoDB Global Tables (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html)
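For tables on the current (2019.11.21) global tables version, replicas are added to an existing table with update-table instead. A sketch, reusing the table from the stack above:

```shell
# Add a us-west-2 replica to a table on the 2019.11.21 global tables version.
aws dynamodb update-table \
    --table-name GlobalUserData \
    --replica-updates '[{"Create": {"RegionName": "us-west-2"}}]'
```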

S3 Cross-Region Replication

Replicate S3 buckets across regions for disaster recovery. S3 CRR typically replicates objects within 15 minutes.

stack "S3Replication" {
    // Source bucket with versioning (required for replication)
    bucket "source-assets" {
        versioned true
        encryption BucketEncryption.S3_MANAGED
    }

    // Destination bucket in DR region (deploy separately)
    bucket "dr-assets-replica" {
        versioned true
        encryption BucketEncryption.S3_MANAGED
    }
}

Configure replication using AWS CLI after deployment:

# Create replication role
aws iam create-role --role-name s3-replication-role \
    --assume-role-policy-document file://trust-policy.json

# Attach replication policy
aws iam put-role-policy --role-name s3-replication-role \
    --policy-name replication-policy \
    --policy-document file://replication-policy.json

# Enable replication
aws s3api put-bucket-replication --bucket source-assets \
    --replication-configuration file://replication-config.json

Reference: S3 Replication (https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html)

Disaster Recovery Patterns

AWS defines four disaster recovery strategies, each with different costs and complexity.

Pattern 1: Backup and Restore (RPO: hours, RTO: 24+ hours)

Lowest cost option. Take periodic backups and restore when needed.

  - Cost: Low (backup storage only, ~$0.05/GB/month)
  - Complexity: Low
  - Best for: Development, non-critical workloads

Implementation: Use RDS automated backups and DynamoDB on-demand backups.

Reference: Werner Vogels (AWS CTO) - "Building Resilient Applications" (https://www.allthingsdistributed.com/2020/11/building-resilient-applications.html)

Pattern 2: Pilot Light (RPO: minutes, RTO: hours)

Maintain minimal version of environment running in DR region. Core infrastructure always on, but scaled down.

  - Cost: Medium (minimal compute running continuously)
  - Complexity: Medium
  - Best for: Standard production workloads

stack "PilotLight" {
    env (environment { region "us-east-1" })

    description "Pilot light infrastructure - core services minimal"

    let! pilotVpc = vpc "PilotVpc" { maxAzs 2 }

    // Minimal database that can be scaled up
    rdsInstance "PilotDB" {
        vpc pilotVpc
        postgresEngine
        instanceType (InstanceType.Of(InstanceClass.BURSTABLE3, InstanceSize.SMALL)) // Minimal size
        allocatedStorage 20
        maxAllocatedStorage 1000 // Can auto-scale
        multiAz false // Single AZ to save cost
        backupRetentionDays 7.0
    }
}

During disaster, scale up the instance class to production size.
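Scaling up is a single modify call. A sketch (the RDS instance identifier and target class are illustrative; CDK generates the actual identifier at deploy time):

```shell
# Scale the pilot-light instance up to production size during failover.
# Identifier and target class are placeholders for this example.
aws rds modify-db-instance \
    --db-instance-identifier pilot-db \
    --db-instance-class db.r5.xlarge \
    --apply-immediately
```

Storage autoscaling (maxAllocatedStorage above) handles disk growth automatically; only the compute class needs the manual bump.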

Pattern 3: Warm Standby (RPO: seconds, RTO: minutes)

Scaled-down but fully functional version runs in DR region.

  - Cost: Medium-High (continuous smaller environment)
  - Complexity: Medium-High
  - Best for: Business-critical workloads

stack "WarmStandby" {
    env (
        environment {
            region "us-west-2" // DR region
        }
    )

    description "Warm standby - scaled down production environment"

    let! warmVpc = vpc "WarmStandbyVpc" { maxAzs 2 }

    // Scaled down but fully functional
    rdsInstance "WarmStandbyDB" {
        vpc warmVpc
        postgresEngine
        instanceType (InstanceType.Of(InstanceClass.MEMORY5, InstanceSize.LARGE)) // 50% of prod
        multiAz true
        backupRetentionDays 30.0
    }

    // Minimal Lambda capacity
    lambda "WarmStandbyAPI" {
        runtime Runtime.DOTNET_8
        handler "Handler::process"
        code "./publish"
        memorySize 512
        reservedConcurrentExecutions 5 // Minimal capacity
    }
}

Pattern 4: Hot Standby/Active-Active (RPO: near-zero, RTO: automatic)

Full environment runs in multiple regions simultaneously.

  - Cost: High (full duplicate infrastructure)
  - Complexity: High
  - Best for: Mission-critical, 99.99%+ SLA requirements

Used by Netflix, Airbnb, and other companies requiring five-nines availability.

stack "HotStandbyPrimary" {
    env (environment { region "us-east-1" })

    description "Active-Active primary region"

    let! primaryVpc = vpc "PrimaryVpc" { maxAzs 3 }

    rdsInstance "PrimaryDB" {
        vpc primaryVpc
        postgresEngine
        instanceType (InstanceType.Of(InstanceClass.MEMORY5, InstanceSize.XLARGE))
        multiAz true
        backupRetentionDays 30.0
    }

    lambda "PrimaryAPI" {
        runtime Runtime.DOTNET_8
        handler "Handler::process"
        code "./publish"
        memorySize 1024
        reservedConcurrentExecutions 100
    }
}

stack "HotStandbySecondary" {
    env (environment { region "us-west-2" })

    description "Active-Active secondary region"

    let! secondaryVpc = vpc "SecondaryVpc" { maxAzs 3 }

    rdsInstance "SecondaryDB" {
        vpc secondaryVpc
        postgresEngine
        instanceType (InstanceType.Of(InstanceClass.MEMORY5, InstanceSize.XLARGE))
        multiAz true
        backupRetentionDays 30.0
    }

    lambda "SecondaryAPI" {
        runtime Runtime.DOTNET_8
        handler "Handler::process"
        code "./publish"
        memorySize 1024
        reservedConcurrentExecutions 100
    }
}

Reference: Adrian Cockcroft (Netflix) - "Migrating to Microservices" (https://www.nginx.com/blog/microservices-at-netflix-architectural-best-practices/)

Monitoring and Alerting

Set up CloudWatch alarms for backup and replication failures.

stack "BackupMonitoring" {
    // Alarm for RDS backup failures
    cloudwatchAlarm "RDSBackupFailure" {
        metricName "BackupRetentionPeriodStorageUsed"
        metricNamespace "AWS/RDS"
        threshold 0.0
        evaluationPeriods 1
        statistic "Average"
        comparisonOperator ComparisonOperator.LESS_THAN_OR_EQUAL_TO_THRESHOLD
        treatMissingData TreatMissingData.BREACHING
    }

    // Alarm for DynamoDB replication lag (for Global Tables)
    cloudwatchAlarm "DynamoDBReplicationLag" {
        metricName "ReplicationLatency"
        metricNamespace "AWS/DynamoDB"
        threshold 60000.0 // 60 seconds
        evaluationPeriods 2
        statistic "Average"
        comparisonOperator ComparisonOperator.GREATER_THAN_THRESHOLD
        treatMissingData TreatMissingData.NOT_BREACHING
    }
}

Testing Disaster Recovery

AWS recommends testing DR procedures quarterly. According to the AWS Reliability Pillar, untested DR plans fail 30-40% of the time during actual disasters.

DR Testing Checklist

  1. Backup Verification: Restore from backup to test environment monthly
  2. Failover Testing: Switch to DR region quarterly
  3. Data Integrity: Verify restored data matches production
  4. RTO Measurement: Time the complete recovery process
  5. Documentation: Update runbooks based on test results

Reference: AWS Well-Architected Reliability Pillar - REL13 (https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/test-reliability.html)

Automated DR Testing with AWS Fault Injection Simulator

AWS FIS runs controlled fault experiments (stopping instances, impairing an Availability Zone, injecting API errors) from pre-defined experiment templates, so failover paths are exercised on a schedule rather than discovered during a real incident.
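A sketch of starting a quarterly DR experiment from an existing template; the template ID is a placeholder, and the template defining the fault actions and stop conditions must be created beforehand:

```shell
# Start a FIS experiment from a pre-built experiment template.
# EXT123EXAMPLE is a placeholder template ID.
aws fis start-experiment \
    --experiment-template-id EXT123EXAMPLE \
    --tags purpose=quarterly-dr-test
```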

Cost Considerations

Backup and DR costs vary significantly based on strategy:

| Strategy     | Monthly Cost (Example) | Use Case                      |
|--------------|------------------------|-------------------------------|
| Backup Only  | $50-200                | Development, non-critical     |
| Pilot Light  | $200-1,000             | Standard production           |
| Warm Standby | $1,000-5,000           | Business-critical             |
| Hot Standby  | $5,000-20,000+         | Mission-critical, 99.99%+ SLA |

Cost Optimization Tips:

  1. Use S3 Glacier for long-term archive (90+ days) - 80% cheaper than standard storage
  2. Enable lifecycle policies to transition old backups automatically
  3. Use cross-region replication only for critical data
  4. Test restore times before committing to expensive active-active
  5. Use Aurora Global Database instead of RDS for faster replication

Reference: AWS Cost Optimization Pillar (https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/welcome.html)

Compliance Mapping

Common compliance frameworks and their DR requirements. For comprehensive governance controls and compliance automation, see the Governance and Compliance with AWS Organizations guide.

  - PCI DSS
  - HIPAA
  - SOX
  - ISO 27001

Real-World Case Study: AWS US-EAST-1 Outage

US-EAST-1 Outage (December 2021)

Key Takeaway: Do not rely on a single region for production workloads. US-EAST-1 is the largest AWS region but has experienced several multi-hour outages.

Reference: AWS Post-Event Summaries (https://aws.amazon.com/message/12721/)

Additional Resources

AWS Official Documentation:

AWS Whitepapers:

Community Resources:

Books:


This guide reflects AWS best practices as of 2025. Always refer to the latest AWS documentation and your organization's compliance requirements when implementing disaster recovery strategies.
