Memory considerations when an S3 bucket contains hundreds of millions of objects

Hi, we are looking to use RaiDrive to connect to an S3 bucket that could potentially contain hundreds of millions of objects.

Access to files through RaiDrive will be infrequent and highly random (it is an archive data set).

I see posts in the forum from people reporting that RaiDrive uses a significant amount of memory. Is it possible to limit the memory used? For example, what if we set the Lifetime: Read and Write times down to minimal values?

We don't mind if access is a little slow, as long as the system remains stable.

Please let me know - cheers.

Hi~ @aosborne ,

By default, RaiDrive keeps the basic metadata of every file in memory.
So if the remote contains many files and you browse the full listing, the metadata for all of those files is held in memory, and a significant increase in memory usage is unavoidable.
RaiDrive does not place any limit on the memory it uses, and shortening the cache file lifetime only partially reduces the memory consumed.
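For a rough sense of scale, here is a minimal back-of-envelope sketch that counts the objects under a prefix and multiplies by an assumed per-entry footprint. The per-entry byte figure and the bucket name are assumptions for illustration, not documented RaiDrive numbers.

```python
# Rough estimate of the metadata that would accumulate if an entire
# prefix were listed. BYTES_PER_ENTRY is an assumed figure, not a
# documented RaiDrive value.
import boto3

BYTES_PER_ENTRY = 1_000  # assumed in-memory footprint per file record

def estimate_listing_memory(bucket: str, prefix: str = "") -> None:
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    count = 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        count += page.get("KeyCount", 0)
    print(f"{count} objects -> roughly {count * BYTES_PER_ENTRY / 1e9:.1f} GB "
          "of metadata if every entry were held in memory")

# estimate_listing_memory("my-archive-bucket")  # hypothetical bucket name
```

With hundreds of millions of objects, even a modest per-entry footprint lands in the hundreds of gigabytes, which is why listing everything is the thing to avoid.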

We’ve been thinking about this for a long time, but so far we haven’t come up with a good enough alternative.
We will continue to research it.

OK, so if we don't "list" the files (e.g. dir /s), then RaiDrive does not load their information into memory. Is that correct?

Hi~ @aosborne ,

Yes. That’s right.
RaiDrive retains file information when reading and writing files and when retrieving listings.
When a file is read or written, data is also kept for cache management, so please set the cache timeout appropriately.
Opening very large files will also increase memory usage.
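In other words, the pattern that keeps memory down is to open archive files by their exact path rather than walking the tree. A minimal sketch of that access pattern follows; the drive letter and key layout are hypothetical, not something RaiDrive prescribes.

```python
# Sketch of the access pattern discussed above: open objects by exact path
# on the mounted drive instead of enumerating the whole tree.
from pathlib import Path

MOUNT = Path(r"R:\my-archive-bucket")  # assumed RaiDrive mount point

def read_archive_object(key: str) -> bytes:
    # Direct open: only this file's metadata needs to be loaded.
    return (MOUNT / key).read_bytes()

# Avoid full-tree enumeration on a huge bucket; a walk like the one below
# forces the entire listing (and its metadata) into memory.
# for path in MOUNT.rglob("*"):
#     ...
```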