Spring Boot Redis Cache

This project demonstrates Spring Boot Redis caching. The main directory caches PostgreSQL data, and the cassandra subdirectory similarly caches Cassandra data.


Getting Started

In this project, I used Redis for caching with Spring Boot. There are multiple Docker containers: postgres, redis, redisinsight, and the Spring Boot application. The cassandra subdirectory uses a Cassandra container instead of Postgres. When you send a request to get all customers or a customer by id, you hit a deliberate delay if Redis has no related data; once the data is cached, the delay is skipped.

Maven Dependencies

I changed this from Lettuce to Jedis:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Redis Configuration

@Configuration
@AutoConfigureAfter(RedisAutoConfiguration.class)
@Slf4j
@EnableCaching
public class RedisConfig {

    @Value("${spring.redis.host}")
    private String redisHost;

    @Value("${spring.redis.port}")
    private int redisPort;

    @Value("${spring.cache.redis.time-to-live}")
    private int cacheTtl;

    @Value("${spring.cache.redis.cache-null-values}")
    private boolean cacheNull;

    @Bean
    public RedisTemplate<String, Serializable> redisCacheTemplate(LettuceConnectionFactory redisConnectionFactory) {
        RedisTemplate<String, Serializable> template = new RedisTemplate<>();
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setConnectionFactory(redisConnectionFactory);
        log.info("redis host {}", redisHost);
        log.info("redis port {}", redisPort);
        return template;
    }

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory factory) {
        RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(cacheTtl))
                .serializeKeysWith(
                        RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new GenericJackson2JsonRedisSerializer()));
        // RedisCacheConfiguration is immutable and caches null values by default,
        // so only the "disable" case needs to be applied, and the result must be reassigned.
        if (!cacheNull) {
            redisCacheConfiguration = redisCacheConfiguration.disableCachingNullValues();
        }
        return RedisCacheManager.builder(factory)
                .cacheDefaults(redisCacheConfiguration)
                .build();
    }
}
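The cache annotations described below are the main consumers of this configuration, but the redisCacheTemplate bean can also be used directly to confirm the Redis connection on startup. A minimal sketch, assuming Lombok's @Slf4j as used elsewhere in the project; the RedisSmokeTest class is hypothetical and not part of the repository:

@Component
@Slf4j
public class RedisSmokeTest implements CommandLineRunner {

    private final RedisTemplate<String, Serializable> redisCacheTemplate;

    public RedisSmokeTest(RedisTemplate<String, Serializable> redisCacheTemplate) {
        this.redisCacheTemplate = redisCacheTemplate;
    }

    @Override
    public void run(String... args) {
        // Write a throwaway key with a short TTL and read it back.
        redisCacheTemplate.opsForValue().set("smoke-test", "ok", Duration.ofSeconds(30));
        log.info("smoke-test value from Redis: {}", redisCacheTemplate.opsForValue().get("smoke-test"));
    }
}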

Spring Service

The Spring Boot customer service implementation looks like the class below. I used Spring Boot's cache annotations for caching.

These are:

  • @Cacheable
  • @CacheEvict
  • @Caching
  • @CacheConfig

I updated the @CacheEvict code because it did not work (see the Stack Overflow link). Next, I changed the add method to use @CachePut.

@Cacheable
@Override
public List<Customer> getAll() {
    waitSomeTime();
    return this.customerRepository.findAll();
}

// @CacheEvict(key = "#id", condition = "#id!=null")
// Switching to a CachePut from a CacheEvict
@CachePut(key = "#customer.id")
@Override
public Customer add(Customer customer) {
    log.info(" write to database");
    return this.customerRepository.save(customer);
}

// this causes all the entries to be deleted if any entries are updated
// @CacheEvict(cacheNames = "customers", allEntries = true)
// this works but is kind of complex. Here customer is the java class object (not customers)
// @CacheEvict(key="#customer?.id", condition="#customer?.id!=null")
// this seems logical, but it doesn't delete the redis cached record
// @CacheEvict(cacheNames = "customers", key = "#id", condition = "#id!=null")
@CachePut(key = "#customer.id")
@Override
public Customer update(Customer customer) {
    Optional<Customer> optCustomer = this.customerRepository.findById(customer.getId());
    if (!optCustomer.isPresent())
        return null;
    Customer repCustomer = optCustomer.get();
    repCustomer.setName(customer.getName());
    repCustomer.setContactName(customer.getContactName());
    repCustomer.setAddress(customer.getAddress());
    repCustomer.setCity(customer.getCity());
    repCustomer.setPostalCode(customer.getPostalCode());
    repCustomer.setCountry(customer.getCountry());
    return this.customerRepository.save(repCustomer);
}

@CacheEvict(allEntries = true)
@Override
public void evictCache() {
    log.info("all entries have been evicted");
}

@Caching(evict = { @CacheEvict(key = "#id", condition = "#id!=null") })
@Override
public void delete(long id) {
    if (this.customerRepository.existsById(id)) {
        this.customerRepository.deleteById(id);
    }
}

@Cacheable(key = "#id", unless = "#result == null")
@Override
public Customer getCustomerById(long id) {
    waitSomeTime();
    return this.customerRepository.findById(id).orElse(null);
}
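The excerpt above calls a waitSomeTime() helper and relies on a class-level cache name that the excerpt does not show. A minimal sketch of the surrounding class, assuming the cache name "customers" (it matches the commented-out @CacheEvict above and the customers::1 key seen later in RedisInsight); the class name, constructor, and sleep duration are assumptions, and the repository's actual values may differ:

@Service
@Slf4j
@CacheConfig(cacheNames = "customers") // assumed; matches the customers::1 key shown in RedisInsight
public class CustomerServiceImpl implements CustomerService {

    private final CustomerRepository customerRepository;

    public CustomerServiceImpl(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    // Deliberate delay so the difference between a cache hit and a database read is visible.
    private void waitSomeTime() {
        log.info("Long wait begin");
        try {
            Thread.sleep(3000); // assumed duration
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        log.info("Long wait end");
    }

    // ... cached methods shown above ...
}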

Docker & Docker Compose

Dockerfile

FROM maven:3.8.6-openjdk-18 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package -DskipTests

FROM openjdk:18
ENV DEBIAN_FRONTEND noninteractive
COPY --from=build /usr/src/app/target/spring-boot-redis-cache-0.0.1-SNAPSHOT.jar /usr/app/spring-boot-redis-cache-0.0.1-SNAPSHOT.jar
COPY --from=build /usr/src/app/src/main/resources/runApplication.sh /usr/app/runApplication.sh
EXPOSE 8080
ENTRYPOINT ["/usr/app/runApplication.sh"]

Docker compose file

docker-compose.yml

version: '3.9'
services:
  db:
    image: postgres
    container_name: db
    ports:
      - '5432:5432'
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=ekoloji
  cache:
    image: redis/redis-stack:latest
    container_name: cache
    ports:
      - '6379:6379'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
  spring-cache:
    image: spring-cache
    container_name: spring-cache
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/postgres
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=ekoloji
      - SPRING_REDIS_HOST=cache
      - SPRING_REDIS_PORT=6379
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8080:8080'
    depends_on:
      - db
      - cache
  insight:
    image: "redislabs/redisinsight:latest"
    container_name: insight
    ports:
      - "8001:8001"
    volumes:
      - ./redisinsight:/db
    depends_on:
      - cache

Build & Run Application

  • Build the Java jar:
 $ source scripts/setEnv.sh
 $ mvn clean package
  • Docker Compose build and run:
$ docker-compose build --no-cache
$ docker-compose up -d

Use RedisInsight

Bring up RedisInsight in the browser (it is mapped to port 8001 in docker-compose, so http://localhost:8001). The RedisInsight documentation is also helpful.

(screenshot: RedisInsight)

When adding the database connection, use cache (the container name) rather than localhost as the host name, since it resolves on the Docker network.

Access PostgreSQL

docker exec -it db bash
psql -U postgres -W
# TIP: find the password for the postgres database in the docker-compose file

(screenshot: psql)

  • See a simple psql interaction above

Access Cassandra

This applies if you are using the cassandra subdirectory.

docker exec -it cassandra1 bash
cqlsh -u cassandra -p jph
use customer;
select * from customer;

Insert some test records to get started

cd scripts
./putCustomer.sh
  • NOTE: these inserted records go into Postgres but will not be cached in Redis, as writes are not cached

Endpoints with Swagger

Bring up the Swagger interface as described below. You can see the endpoints on the http://localhost:8080/swagger-ui.html page.
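The controller behind these Swagger calls is not shown in this README. A minimal sketch of the GET-by-id endpoint, assuming the class name CustomerController and the request path, neither of which is confirmed here:

@RestController
@RequestMapping("/api/customers") // assumed path; check the actual controller in the repository
public class CustomerController {

    private final CustomerService customerService;

    public CustomerController(CustomerService customerService) {
        this.customerService = customerService;
    }

    // The first call for a given id hits Postgres (with the deliberate delay);
    // repeat calls are served from the Redis cache via @Cacheable on the service.
    @GetMapping("/{id}")
    public ResponseEntity<Customer> getCustomerById(@PathVariable long id) {
        Customer customer = customerService.getCustomerById(id);
        return customer == null ? ResponseEntity.notFound().build() : ResponseEntity.ok(customer);
    }
}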

Proof point for the caching

Endpoints

  • Click on "customer-controller"
  • Click on "GET" for the second customer API call (circled above) that gets customers by ID.
  • Click on Try it out (also circled)

(screenshot: Execute button)

  • Enter 1 for the id and click the large blue Execute button

(screenshot: results)

  • You should see results like those shown

Verify the data is now cached in Redis

  • Go back to the RedisInsight browser
  • Select the correct database connection

(screenshot: browse)

  • Click on Browser on the left column below BROWSE
  • Click on the record called customers::1
    • You should see the record with a TTL and all the columns as shown

This demonstrates the cache is working.
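If you prefer a programmatic check over clicking through RedisInsight, the same entry can be inspected through Spring's CacheManager. A minimal sketch, again assuming the cache name "customers"; the CacheInspector class is hypothetical and not part of the repository:

@Component
@Slf4j
public class CacheInspector {

    private final CacheManager cacheManager;

    public CacheInspector(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    // Logs whether a given customer id is currently present in the "customers" cache.
    public void verifyCached(long id) {
        Cache customers = cacheManager.getCache("customers");
        Cache.ValueWrapper cached = (customers != null) ? customers.get(id) : null;
        log.info("customer {} cached? {}", id, cached != null);
    }
}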

You can also use the API script to see the speed difference more easily (there is a deliberate delay on reads from Postgres for demonstration purposes):

cd scripts
./getByCustID.sh

Actuator health and metrics

Spring Boot Actuator is enabled in the pom.xml. Test it out using the health actuator endpoint and the others documented above.

Demo

Spring Boot + Redis + PostgreSQL Caching

Troubleshooting

There was an issue with Cassandra when spring-cache was also running in Docker: the spring-cache application could not connect to the Cassandra container using its image name/hostname, even though the connection is configured through the environment variable SPRING_CASSANDRA_HOST=cassandra1 in docker-compose. A workaround is:

  • use docker network inspect to get the IP for the cassandra host
  • put that IP address into the docker compose - SPRING_CASSANDRA_HOST=172.24.33.1
  • restart and it will work
  • change it back to cassandra1 and it continues to work

This seems related to Cassandra node seeding, but the application works fine when it runs outside of Docker and points to localhost, so the root cause is unclear.
