
Unable to get collection from Redis cache

Reposted · Author: bug小助手 · Updated 2023-10-22 16:56:52



We are using a Redis cache to store data in our application. We rely directly on
@Cacheable to enable caching, with Redis as the underlying store. The configuration is below.


Redis Config -



@Configuration
@EnableCaching
@RequiredArgsConstructor
public class RedisConfig implements CachingConfigurer {

    @Value("${spring.cache.redis.time-to-live}")
    Long redisTTL;

    @Bean
    public RedisCacheConfiguration cacheConfiguration(ObjectMapper objectMapper) {
        objectMapper = objectMapper.copy();
        objectMapper.activateDefaultTyping(objectMapper.getPolymorphicTypeValidator(),
                ObjectMapper.DefaultTyping.NON_FINAL, JsonTypeInfo.As.PROPERTY);
        objectMapper.registerModules(new JavaTimeModule(), new Hibernate5Module())
                .setSerializationInclusion(JsonInclude.Include.NON_NULL)
                .disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES)
                .disable(DeserializationFeature.ADJUST_DATES_TO_CONTEXT_TIME_ZONE)
                .disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS)
                .disable(SerializationFeature.FAIL_ON_EMPTY_BEANS)
                .enable(DeserializationFeature.ACCEPT_EMPTY_STRING_AS_NULL_OBJECT)
                .setVisibility(PropertyAccessor.FIELD, JsonAutoDetect.Visibility.ANY);
        return RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofDays(redisTTL))
                .disableCachingNullValues()
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer(objectMapper)));
    }

    @Bean
    public RedissonClient redisson(@Value("${spring.redis.host}") final String redisHost,
                                   @Value("${spring.redis.port}") final int redisPort,
                                   @Value("${spring.redis.cluster.nodes}") final String clusterAddress,
                                   @Value("${spring.redis.use-cluster}") final boolean useCluster,
                                   @Value("${spring.redis.timeout}") final int timeout) {
        Config config = new Config();
        if (useCluster) {
            config.useClusterServers().addNodeAddress(clusterAddress).setTimeout(timeout);
        } else {
            config.useSingleServer().setAddress(String.format("redis://%s:%d", redisHost, redisPort)).setTimeout(timeout);
        }
        return Redisson.create(config);
    }

    @Bean
    public RedissonConnectionFactory redissonConnectionFactory(RedissonClient redissonClient) {
        return new RedissonConnectionFactory(redissonClient);
    }

    @Bean
    public RedisCacheManager cacheManager(RedissonClient redissonClient, ObjectMapper objectMapper) {
        // Clears the Redis database on startup so stale entries are not served
        this.redissonConnectionFactory(redissonClient).getConnection().flushDb();
        RedisCacheManager redisCacheManager = RedisCacheManager.builder(this.redissonConnectionFactory(redissonClient))
                .cacheDefaults(this.cacheConfiguration(objectMapper))
                .build();
        redisCacheManager.setTransactionAware(true);
        return redisCacheManager;
    }

    @Override
    public CacheErrorHandler errorHandler() {
        return new RedisCacheErrorHandler();
    }

    @Slf4j
    public static class RedisCacheErrorHandler implements CacheErrorHandler {

        @Override
        public void handleCacheGetError(RuntimeException exception, Cache cache, Object key) {
            log.info("Unable to get from cache " + cache.getName() + " : " + exception.getMessage());
        }

        @Override
        public void handleCachePutError(RuntimeException exception, Cache cache, Object key, Object value) {
            log.info("Unable to put into cache " + cache.getName() + " : " + exception.getMessage());
        }

        @Override
        public void handleCacheEvictError(RuntimeException exception, Cache cache, Object key) {
            log.info("Unable to evict from cache " + cache.getName() + " : " + exception.getMessage());
        }

        @Override
        public void handleCacheClearError(RuntimeException exception, Cache cache) {
            log.info("Unable to clear cache " + cache.getName() + " : " + exception.getMessage());
        }
    }
}

Service class -



@Service
@AllArgsConstructor
@Transactional
public class CompanyServiceImpl implements CompanyService {

    private final CompanyRepository companyRepository;

    @Cacheable(key = "#companyName", value = COMPANY_CACHE_NAME, cacheManager = "cacheManager")
    public Optional<CompanyEntity> findByName(String companyName) {
        return companyRepository.findByName(companyName);
    }

}

Company class -



@Entity
@Builder
@Jacksonized
@AllArgsConstructor
@NoArgsConstructor
public class CompanyEntity {

    @Id
    private Long id;

    @ToString.Exclude
    @OneToMany(mappedBy = "companyEntity", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private List<EmployeeEntity> employeeEntities;

}

Once we run the service, caching initially works properly. After we fire the query, we get the following record in the cache -


> get Company::ABC

" {"@class":"com.abc.entity.CompanyEntity","createdTs":1693922698604,"id":100000000002,"name":"ABC","description":"ABC Operations","active":true,"EmployeeEntities":["org.hibernate.collection.internal.PersistentBag",[{"@class":"com.abc.entity.EmployeeEntity","createdTs":1693922698604,"Id":100000000002,"EmployeeEntity":{"@class":"com.abc.EmployeeLevel","levelId":100000000000,"name":"H1","active":true}}]]}"





But when we execute the query a second time, the call still enters the @Cacheable method body, with the logs below -


    Unable to get from cache Company : Could not read JSON: failed to lazily initialize a
    collection, could not initialize proxy - no Session (through reference chain:
    com.abc.entity.CompanyEntity$CompanyEntityBuilder["employeeEntities"]); nested exception
    is com.fasterxml.jackson.databind.JsonMappingException: failed to lazily initialize a
    collection, could not initialize proxy - no Session (through reference chain:
    com.abc.entity.CompanyEntity$CompanyEntityBuilder["employeeEntities"])

I understood from various SO answers that this is due to the unavailability of a Hibernate session for the proxied child collection. But we fetch in EAGER mode, and the whole collection is present in the cache too. Still, the call enters the cached method and reads the values from the DB. How can we prevent this and serve the result directly from the cache?


UPDATE

If we use LAZY loading, the collection objects don't get cached and come back as null. But we need the collection to be cached, because the methods are not called in order and the cached method would otherwise return null later.


More answers
Recommended answer

Found the required answer here. My cached collection reference was not being deserialized properly. After applying the required changes, I was able to successfully deserialize the cached collection object from the Redis cache.


Changes in the existing Redis config -



@Bean
public RedisCacheConfiguration cacheConfiguration(ObjectMapper objectMapper) {
    objectMapper = objectMapper.copy();
    // Note: enableDefaultTyping is deprecated since Jackson 2.10;
    // activateDefaultTyping(PolymorphicTypeValidator, ...) is its replacement
    objectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL, JsonTypeInfo.As.PROPERTY);
    objectMapper.registerModules(new JavaTimeModule(), new Hibernate5Module(), new Jdk8Module())
            .setSerializationInclusion(JsonInclude.Include.NON_NULL)
            .disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES)
            .disable(DeserializationFeature.ADJUST_DATES_TO_CONTEXT_TIME_ZONE)
            .disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS)
            .disable(SerializationFeature.FAIL_ON_EMPTY_BEANS)
            .enable(DeserializationFeature.ACCEPT_EMPTY_STRING_AS_NULL_OBJECT)
            .setVisibility(PropertyAccessor.FIELD, JsonAutoDetect.Visibility.ANY)
            // Register the mixin so Hibernate collection types are written with JDK type ids
            .addMixIn(Collection.class, HibernateCollectionMixIn.class);
    return RedisCacheConfiguration.defaultCacheConfig()
            .entryTtl(Duration.ofDays(redisTTL))
            .disableCachingNullValues()
            .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()))
            .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer(objectMapper)));
}

Two new classes were added as part of the fix -



class HibernateCollectionIdResolver extends TypeIdResolverBase {

    public HibernateCollectionIdResolver() {
    }

    @Override
    public String idFromValue(Object value) {
        // Translate Hibernate collection classes to their JDK collection equivalents
        if (value instanceof PersistentArrayHolder) {
            return Array.class.getName();
        } else if (value instanceof PersistentBag || value instanceof PersistentIdentifierBag || value instanceof PersistentList) {
            return List.class.getName();
        } else if (value instanceof PersistentSortedMap) {
            return TreeMap.class.getName();
        } else if (value instanceof PersistentSortedSet) {
            return TreeSet.class.getName();
        } else if (value instanceof PersistentMap) {
            return HashMap.class.getName();
        } else if (value instanceof PersistentSet) {
            return HashSet.class.getName();
        } else {
            // Default: already a JDK collection
            return value.getClass().getName();
        }
    }

    @Override
    public String idFromValueAndType(Object value, Class<?> suggestedType) {
        return idFromValue(value);
    }

    // Deserialize the JSON-annotated JDK collection class name into a JavaType
    @Override
    public JavaType typeFromId(DatabindContext ctx, String id) throws IOException {
        try {
            return ctx.getConfig().constructType(Class.forName(id));
        } catch (ClassNotFoundException e) {
            throw new UnsupportedOperationException(e);
        }
    }

    @Override
    public JsonTypeInfo.Id getMechanism() {
        return JsonTypeInfo.Id.CLASS;
    }

}


And


@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS)
@JsonTypeIdResolver(value = HibernateCollectionIdResolver.class)
public class HibernateCollectionMixIn {
}


The JsonMappingException means Jackson is attempting to deserialize a Hibernate proxy object but cannot, because no Hibernate session is available during deserialization.

So you need to ensure that the employeeEntities collection is initialized to a non-proxy state before serialization, so that Jackson can deserialize the CompanyEntity objects from the cache without requiring a Hibernate session.


You can ensure proper initialization of the collection by adjusting your service method to force the initialization of the employeeEntities collection before the CompanyEntity is cached:


@Cacheable(key = "#companyName", value = COMPANY_CACHE_NAME, cacheManager = "cacheManager")
public Optional<CompanyEntity> findByName(String companyName) {
    Optional<CompanyEntity> companyEntityOpt = companyRepository.findByName(companyName);
    companyEntityOpt.ifPresent(companyEntity -> {
        companyEntity.getEmployeeEntities().size(); // Force initialization of the collection
    });
    return companyEntityOpt;
}

That way, the employeeEntities collection is converted from a Hibernate proxy into a regular Java collection, which should avoid the JsonMappingException you are seeing during deserialization from the cache.


This assumes you are using FetchType.EAGER, meaning the employeeEntities collection is being loaded automatically when you fetch a CompanyEntity.





If the issue persists, you can check whether detaching the entity helps:


@Cacheable(key = "#companyName", value = COMPANY_CACHE_NAME, cacheManager = "cacheManager")
public Optional<CompanyEntity> findByName(String companyName) {
    Optional<CompanyEntity> companyEntityOpt = companyRepository.findByName(companyName);
    companyEntityOpt.ifPresent(companyEntity -> {
        companyEntity.getEmployeeEntities().size(); // Force initialization of the collection
        // Obtain entity manager and detach the entity
        EntityManager em = // get entity manager bean
        em.detach(companyEntity);
    });
    return companyEntityOpt;
}

Detaching the entity from the Hibernate session turns it into a normal POJO.


Note that to get the EntityManager you will need to inject it into your service class, and you should ensure that all relationships and attributes that will be accessed later are properly initialized before detaching the entity.

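A minimal sketch of that injection, assuming a container-managed EntityManager obtained via JPA's @PersistenceContext (the surrounding types come from the snippets above):

```java
@Service
@Transactional
public class CompanyServiceImpl implements CompanyService {

    private final CompanyRepository companyRepository;

    // Container-managed EntityManager, injected by Spring/the JPA provider
    @PersistenceContext
    private EntityManager entityManager;

    public CompanyServiceImpl(CompanyRepository companyRepository) {
        this.companyRepository = companyRepository;
    }

    @Cacheable(key = "#companyName", value = COMPANY_CACHE_NAME, cacheManager = "cacheManager")
    public Optional<CompanyEntity> findByName(String companyName) {
        Optional<CompanyEntity> companyEntityOpt = companyRepository.findByName(companyName);
        companyEntityOpt.ifPresent(companyEntity -> {
            companyEntity.getEmployeeEntities().size(); // force initialization first
            entityManager.detach(companyEntity);        // then detach, so a plain POJO is cached
        });
        return companyEntityOpt;
    }
}
```

Note the ordering: the collection must be initialized while the entity is still managed, because detach does not cascade initialization.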




The other approach, which avoids caching Hibernate-managed entities directly and ensures that Hibernate proxies are never serialized, is to use DTOs (Data Transfer Objects) to separate your persistence model from the objects your application logic works with.



  • Create a DTO class that corresponds to your CompanyEntity class.

  • Before caching, map your CompanyEntity instance to a DTO instance.

  • Cache the DTO instance instead of the entity instance.

  • When reading from the cache, you will get a DTO instance which you can then map back to an entity instance if necessary.


In your service class, it would look something like this:



@Service
@AllArgsConstructor
@Transactional
public class CompanyServiceImpl implements CompanyService {

    private final CompanyRepository companyRepository;
    private final ModelMapper modelMapper; // Bean for mapping entity to DTO

    @Cacheable(key = "#companyName", value = COMPANY_CACHE_NAME, cacheManager = "cacheManager")
    public Optional<CompanyDTO> findByName(String companyName) {
        Optional<CompanyEntity> companyEntityOpt = companyRepository.findByName(companyName);
        return companyEntityOpt.map(companyEntity -> {
            companyEntity.getEmployeeEntities().size(); // Force initialization of the collection
            return modelMapper.map(companyEntity, CompanyDTO.class); // Map entity to DTO before caching
        });
    }
}

In this method, you would use a ModelMapper or another mapping framework to map your entity to a DTO. That DTO would be what gets cached, avoiding the Hibernate proxy issues you are experiencing.



Remember to create a corresponding DTO for EmployeeEntity and any other entities that are part of your object graph.

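As a sketch, such DTOs could look like the following. These are plain POJOs; CompanyDTO and EmployeeDTO are illustrative names rather than classes from the original post, and the fields mirror the entity snippets above:

```java
import java.util.ArrayList;
import java.util.List;

// Plain serializable counterpart of CompanyEntity: it holds a regular
// java.util.List, never a Hibernate PersistentBag, so Jackson can
// round-trip it without an open session.
class CompanyDTO {
    private Long id;
    private String name;
    private List<EmployeeDTO> employees = new ArrayList<>();

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public List<EmployeeDTO> getEmployees() { return employees; }
    public void setEmployees(List<EmployeeDTO> employees) { this.employees = employees; }
}

// Counterpart of EmployeeEntity, flattened for caching
class EmployeeDTO {
    private Long id;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
}
```

Because the DTOs contain only JDK types, no mixin or custom type-id resolver is needed to cache them.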


That approach requires creating additional classes and modifying your service logic, but it creates a clean separation between your Hibernate entities and what gets cached, which helps avoid issues like this one.


More comments

Can you illustrate the changes you had to make, as applied to your use case?

Hello VonC, sorry: even with the force-initialization check above, the same exception is still there.

@Neil OK. I have edited the answer to address your comment.


Hello @VonC, the suggestions above look correct, but the answer above is what worked for me. I have posted an answer.
