Yes, this is a known issue but I'm afraid there's no easy way around it. Optimizing this would require designing a completely different plugin interface for CopyFile() for exactly this purpose. Currently, the XAD plugin just hooks into Hollywood's file handler and pretends that files in the archive are real files. So you could also open a file within a XAD archive using OpenFile() and process it as if it were a normal file. This means that the XAD plugin has to support all the typical file I/O operations like seeking, getting the file size, and so on. Of course, the XAD API doesn't support any of that because it only supports raw extraction of files; for some formats it can't even tell the uncompressed size of a file. So the only way to make the two APIs match is to buffer the entire file in memory (or in a temporary file) first.
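To illustrate why the buffering is unavoidable, here is a minimal sketch of accessing a file inside an archive through the virtual file handler. The archive and entry names are made up, and the exact path syntax for addressing entries inside an archive is an assumption based on how the plugin's file handler is described above:

```hollywood
@REQUIRE "xad"

; hypothetical archive/entry names, for illustration only
OpenFile(1, "archive.lha/docs/readme.txt")

; FileLength() needs the full uncompressed size up front, and Seek()
; needs random access - neither is possible with raw XAD extraction,
; so the plugin must decompress the whole entry into a buffer first
DebugPrint(FileLength(1))
Seek(1, 100)
CloseFile(1)
```

Everything after OpenFile() behaves like normal file I/O, which is exactly why the plugin has to pay the buffering cost at open time.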
In the end, for optimal performance and scalability you should really use xad.ExtractFile() and zip.ExtractFile() instead. The CopyFile() route is only useful for archives that don't contain too many files or overly large ones. It's just a convenience function for dealing with smaller archives. There's also a warning about CopyFile() and the XAD plugin here.
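For comparison, a hedged sketch of the recommended route. Only xad.ExtractFile() itself is named in this thread; the argument order (entry name, destination) is an assumption and should be checked against the plugin's documentation:

```hollywood
@REQUIRE "xad"

; hypothetical names; extraction streams the entry straight to disk
; instead of buffering the whole decompressed file like CopyFile() does
xad.ExtractFile("archive.lha/docs/readme.txt", "readme.txt")
```

Because extraction writes the data out as it is decompressed, this scales to archives with many or very large files where the CopyFile() route would exhaust memory.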
"This doesn't seem to happen under the same conditions with the Zip plugin"
Actually, it should happen with the ZIP plugin as well. The only difference is that the ZIP plugin doesn't buffer the file entirely on opening, but during extraction it will still need as much memory as the biggest file, so you shouldn't be able to extract a 1GB file with only 512MB of memory...