A Mule API is designed to retrieve product catalog information. The backend system that holds this data is slow, and the data changes only once every 24 hours during a nightly batch update. The API is deployed to three CloudHub workers and repeatedly receives a high volume of requests for the same product data. The primary goals are to dramatically improve API response time and to reduce load on the legacy backend. Which caching strategy should be implemented to meet these requirements effectively?

```mermaid
graph TD
    subgraph CloudHub VPC
        LB[Load Balancer] --> W1[Worker 1]
        LB --> W2[Worker 2]
        LB --> W3[Worker 3]
    end
    subgraph Backend
        LegacyDB[(Legacy DB)]
    end
    W1 -->|Cache Miss| LegacyDB
    W2 -->|Cache Miss| LegacyDB
    W3 -->|Cache Miss| LegacyDB
    W1 -->|Cache Hit| SharedCache[(Shared Cache)]
    W2 -->|Cache Hit| SharedCache
    W3 -->|Cache Hit| SharedCache
    Client((Client)) --> LB
```
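The scenario and diagram point toward a Cache scope backed by a shared, persistent object store, so that an entry cached by any one worker is reused by all three. A minimal configuration sketch, assuming Mule 4 on CloudHub with Object Store v2 enabled (the names `sharedProductCache`, `productCachingStrategy`, the listener/requester configs, and the key expression are all illustrative):

```xml
<!-- Persistent object store; on CloudHub with Object Store v2 enabled,
     persistent stores are shared across all workers of the application.
     A 24-hour TTL matches the nightly batch update cadence. -->
<os:object-store name="sharedProductCache"
                 persistent="true"
                 entryTtl="24"
                 entryTtlUnit="HOURS" />

<!-- Caching strategy keyed on the requested product ID (illustrative key). -->
<ee:object-store-caching-strategy name="productCachingStrategy"
                                  objectStore="sharedProductCache"
                                  keyGenerationExpression="#[attributes.uriParams.productId]" />

<flow name="get-product-flow">
    <http:listener config-ref="api-httpListenerConfig"
                   path="/products/{productId}" />
    <!-- Only a cache miss falls through to the slow legacy backend;
         hits are served from the shared store. -->
    <ee:cache cachingStrategy="productCachingStrategy">
        <http:request config-ref="legacyBackendConfig"
                      method="GET"
                      path="/catalog/#[attributes.uriParams.productId]" />
    </ee:cache>
</flow>
```

Because the data changes only during the nightly batch, the 24-hour TTL lets stale entries expire on their own; if fresher results are needed immediately after the batch run, the cache could also be invalidated explicitly at that point.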