
C# WebClient disable cache

Reposted. Author: IT王子. Updated: 2023-10-29 04:15:44

Good day.

I'm using the WebClient class in my C# application to download the same file every minute; the application then performs a simple check to see whether the file has changed and, if it has, does something with it.

Because this file is downloaded every minute, the WebClient caching system caches it and, instead of downloading it again, simply serves it from the cache, which breaks the check for whether the downloaded file is new.

So I would like to know how to disable the WebClient class's caching system.
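For context, the "has the file changed" check described above could be sketched as a hash comparison (a minimal sketch; the `FileChangeChecker` name and the SHA-256 approach are illustrative, not from the question):

```csharp
using System;
using System.Security.Cryptography;

// Tracks the hash of the last downloaded payload and reports changes.
class FileChangeChecker
{
    private string _lastHash;

    // Returns true when the payload differs from the previous call.
    public bool HasChanged(byte[] data)
    {
        string hash;
        using (var sha = SHA256.Create())
            hash = Convert.ToBase64String(sha.ComputeHash(data));
        bool changed = hash != _lastHash;
        _lastHash = hash;
        return changed;
    }
}
```

The first call always reports a change, since there is no previous hash to compare against. A typical use with WebClient would be `checker.HasChanged(client.DownloadData(url))`.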

I have tried:

Client.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.BypassCache);

I have also tried the header:

Client.Headers.Add("Cache-Control", "no-cache");

Neither worked. So how can I permanently disable the cache?

Thank you.

EDIT

I also tried the following CacheLevels: NoCacheNoStore, BypassCache, Reload. No effect. However, if I restart my computer the cache seems to be cleared, but I can't restart the computer every time.
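For what it's worth, one commonly suggested variant of the attempts above (not confirmed to work in the asker's environment) is to derive from WebClient so that both the no-cache policy and the no-cache headers are forced onto every request the client creates, rather than being set once on the client; a sketch:

```csharp
using System;
using System.Net;
using System.Net.Cache;

// WebClient variant that asks every caching layer not to serve cached content.
class NoCacheWebClient : WebClient
{
    public NoCacheWebClient()
    {
        // Bypass the local .NET request cache entirely.
        CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
    }

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        if (request != null)
        {
            // Also ask intermediate proxies/servers not to serve a cached copy.
            request.Headers["Cache-Control"] = "no-cache";
            request.Headers["Pragma"] = "no-cache";
        }
        return request;
    }
}
```

Overriding GetWebRequest guarantees the headers are applied to each request, whereas headers set directly on WebClient.Headers may not survive across consecutive downloads.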

UPDATE in the face of recent activity (8 Sep 2012)

The answer marked as accepted solved my issue. To put it simply, I used sockets to download the file, and that solved the problem: basically, a raw GET request for the desired file. I won't go into the details of how to do it, because I'm sure you can find plenty of how-tos right here on SO. That doesn't mean my solution is also the best for you, so my first advice is to read the other answers and see if any are useful.

Anyway, since this question has seen some recent activity, I thought I'd add this update with some hints and ideas for those facing similar problems who have tried everything they could think of and are sure the problem doesn't lie in their code. It likely is the code in most cases, but sometimes we just don't see it; go for a walk and come back a few minutes later, and you will probably spot it as if it were the most obvious thing all along.

Either way, if you're sure, then I advise checking whether your request passes through some other device with caching capabilities (computers, routers, proxies, ...) before it reaches its intended destination.

Consider that most requests go through at least one such device, most commonly a router, unless of course you are connected directly to the Internet via your service provider's network.

At one time my own router was caching the file. Odd, I know, but that was the case: whenever I rebooted it or connected directly to the Internet, my caching problem went away. And no, there was no other device connected to the router that could be blamed, only the computer and the router.

By the way, a piece of general advice, though it mostly applies to those who work on company development machines rather than their own: could your development computer, by any chance, be running a caching service of some sort? It's possible.

Furthermore, consider that many high-end websites and services use Content Delivery Networks (CDNs), and depending on the CDN provider, when a file is updated or changed it takes some time for the change to propagate across the entire network. It is therefore possible that you had the bad luck of requesting a file in the middle of an update, and the CDN server closest to you hadn't finished updating.

In any case, especially if you are always requesting the same file over and over, or if you can't find where the problem lies, then, if possible, I advise you to reconsider the approach of requesting the same file time after time, and instead look into building a simple web service to satisfy the needs you originally intended to satisfy with that file.

And if you are considering that option, I think you will probably have an easier time building a REST-style Web API for your needs.

I hope this update is useful to you in some way; it certainly would have been for me a while back. Best of luck with your coding endeavors.

Best answer

You can try appending a random number to your URL as part of the query string each time you download the file. This ensures the URL is unique every time.

For example:

Random random = new Random();
string url = originalUrl + "?random=" + random.Next().ToString();
webclient.DownloadFile(url, downloadedfileurl);
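Two caveats with the snippet above: successive Random values can repeat (and Random instances created in quick succession on older .NET versions can share a seed), and unconditionally appending "?" breaks URLs that already carry a query string. A slightly more robust cache-buster (a sketch; the helper name and the `nocache` parameter are illustrative) might be:

```csharp
using System;

static class UrlHelper
{
    // Appends a unique query-string parameter so caches treat the URL as new,
    // choosing "?" or "&" depending on whether a query string already exists.
    public static string AddCacheBuster(string url)
    {
        string separator = url.Contains("?") ? "&" : "?";
        return url + separator + "nocache=" + Guid.NewGuid().ToString("N");
    }
}
```

Usage: `webclient.DownloadFile(UrlHelper.AddCacheBuster(originalUrl), downloadedfileurl);`. Note this only defeats caches keyed on the full URL; it does not stop the server from ignoring unknown parameters and serving its own cached copy.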

Regarding disabling the C# WebClient cache, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/3812089/
