Finally beaten Microsoft...after 15 years


Mallette


Check out RevoDrive. It's basically a RAM drive with circuitry that makes it persistent without power while appearing to the system as a "normal" drive. You might also look at whether a software RAM disk would work. It has the advantage of lacking the overhead of a disk OS, so its speed is limited only by the processor and bus. The downside is that if you crash, whatever is in the RAM disk is gone. Of course, that may well be the case anyway, even with mechanical or solid-state drives.

One of the "old" OSes I once used had a RAM drive utility that would survive a warm reboot. It was very cool. We'd copy the OS up on power-up, and warm reboots, when necessary, took only a few seconds.

The above tweaks may well significantly improve your throughput, and, as I mentioned, the benefit is quite measurable in the bottom line: simply multiply the operator cost per minute by the speed-up. Presented to management that way, even things that appear expensive can suddenly look pretty good.
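To make the "operator cost times speed-up" pitch concrete, here is a minimal sketch of the arithmetic. All the figures are hypothetical placeholders, not numbers from this thread:

```python
# Hedged sketch of the labor-savings pitch; every figure below is an
# assumption for illustration only.
operator_cost_per_min = 0.75   # fully loaded labor cost per minute (assumed)
minutes_saved_per_job = 3      # speed-up from faster storage (assumed)
jobs_per_day = 40              # assumed workload
working_days_per_year = 250    # assumed

annual_saving = (operator_cost_per_min * minutes_saved_per_job
                 * jobs_per_day * working_days_per_year)
print(f"Annual labor saved: ${annual_saving:,.2f}")  # -> $22,500.00
```

Even a modest per-job saving multiplies into a figure that makes "expensive" hardware look cheap.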

I always point out that equipment is a one-time cost (at least for the life of the hardware), while labor is an ongoing, recurring cost.

Dave

PS - After pondering what you said about disk speed being your main limitation, you really should first survey the available RAM disk utilities and try one. I rather doubt the loss of cached data in the event of a crash would be any more of a problem than with a conventional drive. I believe W7 Pro 64 supports up to 192 GB of RAM, so you could have beaucoup system RAM and a 150 GB or so RAM disk. That should speed things up a bit if it is, as you said, all about drive speed. I'd be interested in the results of any tests you might run.


  • Replies 46

Thanks for the tips!! I will see if I can get my hands on one, even if it comes out of my pocket first.

I will keep you informed!!!

Coolies. Sounds like a fun project. A quick search for RAM disk software turned up a number of W7-compatible ones. No recommendations, since I've not tried any of them, but the price is right. See what the maximum virtual memory needs of your processes are and get enough RAM to handle them in the RAM disk. You mentioned 100 MB files; those aren't that large. RAM is going for well under $100.00 for 8 GB or more.

Dave


We did a quick calculation, and the developers don't think we would gain much on the RIP side; the archiving hits the hard disk more than the ripping does. We would gain around 5% in total at most.

I still want to try this, though, because some of the large-throughput printers are printed to file, so this could be a big plus. For example, a full-color banner of 2.5 m by 5 m has a RIP speed of 798.97 m²/h and a print speed of 89.12 m²/h. This is printing to a "fast" hard disk, so I think these printers would benefit greatly from an SSD.
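Putting the banner figures above into minutes makes it easy to see which stage dominates:

```python
# Figures from the post: a 2.5 m x 5 m banner, RIP speed 798.97 m²/h,
# print-to-file speed 89.12 m²/h.
area = 2.5 * 5.0            # m² = 12.5
rip_rate = 798.97           # m²/h
print_rate = 89.12          # m²/h

rip_minutes = area / rip_rate * 60
print_minutes = area / print_rate * 60
print(f"RIP:   {rip_minutes:.1f} min")    # -> 0.9 min
print(f"Print: {print_minutes:.1f} min")  # -> 8.4 min
```

The print-to-file stage takes roughly nine times as long as the RIP, which is why speeding up the destination drive is where the win would be.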


Hmm...so you have enough RAM for the processing without swapping files to the HDD? What happens to the file once it is processed? I am asking about process flow here for a reason: if the PC is idle whilst the file is written, you definitely want to speed that up. If your systems are REALLY stable, using a RAM drive as a buffer to unload the computer's RAM so you can start another job would be pretty easy and would speed things up a LOT.

If you have ANY doubts and don't want to lose work in a crash, you might look at a conventional SSD, which tops out at around 500 MB/s on writes. Of course, if that is making you money by unloading your CPUs quicker, you might make twice as much with a RevoDrive 3 X2, which I've seen reports of hitting up to 1200 MB/s write speeds.

You'd need to use a "watched folder" utility so that anything copied to the SSD/Revo is immediately recopied to the server or a conventional SSD. No rocket science or programming required.
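A "watched folder" really can be this simple. Here is a minimal polling sketch using only the standard library; the folder paths in the example invocation are made-up placeholders, not from the thread:

```python
# Minimal watched-folder sketch: anything that appears in src is copied
# on to dst. Polling, standard library only; no third-party watcher needed.
import shutil
import time
from pathlib import Path

def sweep_once(src: Path, dst: Path, seen: set) -> None:
    """Copy every file in src not yet copied over to dst."""
    for f in src.iterdir():
        if f.is_file() and f.name not in seen:
            shutil.copy2(f, dst / f.name)   # copy2 preserves timestamps
            seen.add(f.name)

def watch(src: Path, dst: Path, poll_seconds: float = 5.0) -> None:
    """Poll src forever, mirroring new files into dst."""
    seen: set = set()
    while True:
        sweep_once(src, dst, seen)
        time.sleep(poll_seconds)

# Hypothetical usage (placeholder paths):
# watch(Path("R:/spool"), Path("//server/archive"))
```

A real deployment would also want to check that a file has finished writing before copying it, but the core loop is no more than this.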

Not sure about your file sizes, but a $700.00 RevoDrive 3 X2 has a 240 GB capacity...considerable. It should be able to keep your PCs processing instead of storing.

Dave


Thanks for letting me bounce this off your more knowledgeable head. I am more of a hands-on LFP printer repair tech who slipped into the software side, so my hardware savvy is below par.

To the work flow.

The file sizes vary depending on what is being printed and how big it is; the producers learned early on to try to keep file sizes low because of this exact problem with RIP times. Of course, the more layers and the more complex the file, the longer it takes (less so now with the Adobe APPE for PDF).

Let's say that to get good output, depending on the design, you have files starting at 30-35 MB. But for RIPs it is not the data size that is important, it is the complexity: how many color transfers, whether the colors are in gamut (gamut mapping), how many layers, smooth shades, etc. This is what hits the processor, and only with really, really big, complex files does it start to swap to the HDD.

After RIPping, the job gets pushed to the print queue, and that is when it gets really slow, because the speed is dictated by the printer. That is why many of the very large printers have their own front end and do not need our RIP, hence the print to file. The other printers are connected either over USB 2.0 or TCP/IP, and that is the bottleneck of the system, because they have limited internal memory (if any): we push, they print, purge, then we push again. At production speeds of 20 m²/h, this takes a while.
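The push-print-purge cycle described above is a stop-and-wait loop, so the printer sits idle during every push and purge. A toy model shows how utilization falls out of the cycle times; every number here is an assumption for illustration, not a measurement from any real printer:

```python
# Toy model of the push -> print -> purge cycle. All figures are
# hypothetical assumptions, purely to illustrate the shape of the math.
buffer_mb = 64            # printer's internal buffer size (assumed)
link_mb_s = 30            # effective USB 2.0 throughput (assumed)
print_s_per_buffer = 90   # time to print one buffer's worth (assumed)
purge_s = 5               # time to purge the buffer (assumed)

push_s = buffer_mb / link_mb_s
cycle_s = push_s + print_s_per_buffer + purge_s
utilization = print_s_per_buffer / cycle_s
print(f"Printer busy {utilization:.0%} of each cycle")
```

With numbers like these the link barely matters and purge time dominates the idle fraction; with a slower link or a smaller buffer, the push time starts eating into throughput, which is the bottleneck the post describes.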

I will bounce your idea around here with dev and see what they think. Once again, thanks for the input! Next time you are in Germany I will invite you for a cold one!!!


Thanks for letting me bounce this off your more knowledgeable head. I am more of a hands-on LFP printer repair tech who slipped into the software side, so my hardware savvy is below par.

Hey, it's fun. Bear in mind that I am NOT a real computer guru or anything, though a significant part of my profession relies on them, so I've delved into price/performance issues pretty heavily. More important is process flow analysis, which doesn't depend at all on knowledge of the process itself, only on asking the right questions. From our brief exchange we've found some areas to look at. At one point, my understanding was that disk swap issues (essentially a matter of insufficient RAM or the wrong OS version) were the main issue, along with very large file sizes. At this point, I am not quite so sure. The file sizes you mention really aren't that large. We deal with files of 10 GB or more regularly in video, so drive throughput can be a MAJOR bottleneck, as can GPU performance. Those thousand-dollar-plus GPUs start looking downright cheap when they speed up your throughput by 20 times or so. Same for thousand-dollar-plus SSDs and sophisticated RAM systems.

What you need to do is analyze each stage and locate the bottlenecks. Once they're isolated, look at solutions. Just because something APPEARS pricey, don't be intimidated. What you want to take to management isn't hardware costs but process costs.

"Boss, it costs X to do it this way and Y to do it that way. Which way do you want to do it?"

You'll know the answer before you ask the question if you've done your homework.
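The homework behind that question is just per-process cost accounting. A hedged sketch, with every figure a hypothetical placeholder:

```python
# "Boss, it costs X this way and Y that way" as arithmetic.
# All inputs below are invented for illustration.
def yearly_process_cost(hardware, hours_per_job, jobs_per_year,
                        labor_per_hour, hardware_life_years=3):
    """Amortized hardware cost plus annual labor for one process variant."""
    labor = hours_per_job * jobs_per_year * labor_per_hour
    return hardware / hardware_life_years + labor

# Current process vs. the same process with a (hypothetical) $700 SSD
# that shaves 20% off each job's time:
slow = yearly_process_cost(hardware=0,   hours_per_job=1.0,
                           jobs_per_year=2000, labor_per_hour=30)
fast = yearly_process_cost(hardware=700, hours_per_job=0.8,
                           jobs_per_year=2000, labor_per_hour=30)
print(f"current: ${slow:,.0f}/yr   with SSD: ${fast:,.0f}/yr")
```

Framed as process cost per year rather than hardware price, the answer tends to ask itself.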

One other battle I had to fight recently. Downright dumb, but there it was. We outgrew our server and, since we are going entirely to 1080p video acquisition, are adding a couple of hundred GB per month to our archives. I told IT we needed at least 36 TB of storage with high-speed access. They came back with $100,000.00 for their "solution." So I asked, "Why do we need a server?" They looked at me like I was an idiot and said, "To allow intranet access to these files and large-scale protected storage."

WRONG. The only function a "server" provides beyond NAS is serving applications. We don't need to serve any applications from our asset storage, only files. Further, we can use a 10GBase-T network to provide real-time 5 GB/s throughput that will allow direct editing on the NAS without having to copy files to the video edit system...a MAJOR time and money saver. And we can do this for $90k LESS than their file server "solution." Speaking of 10GBase-T, that is a network technology they weren't even familiar with, even though it's several years old. It is expensive for normal enterprise LAN applications, but extremely cost-effective for local, high-throughput needs such as media...and perhaps applications like yours.
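For a sense of why the jump from gigabit to 10GBase-T matters for media files, here is a back-of-envelope transfer-time comparison. The 80% efficiency factor is an assumption, not a measured figure:

```python
# Rough transfer time for a 10 GB video file over gigabit vs. 10GBase-T.
# Assumes ~80% of line rate is achievable in practice (an assumption).
file_gb = 10
times = {}
for name, gbit_s in [("1000Base-T", 1), ("10GBase-T", 10)]:
    effective_bytes_s = gbit_s * 1e9 / 8 * 0.8   # bits/s -> usable bytes/s
    times[name] = file_gb * 1e9 / effective_bytes_s
    print(f"{name}: {times[name]:.0f} s")
# 1000Base-T: 100 s, 10GBase-T: 10 s
```

A hundred seconds per file copy, many times a day, is exactly the kind of process cost that makes the pricier NIC pay for itself.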

I am not knocking IT, regardless of how it sounds. They are generalists and cannot know every discipline, especially ours.

The diagram is what I provided to IT. The two 10GBase-T ports allow us to edit directly on the NAS itself with insignificant latency. It also supports four 1000Base-T NICs with Link Aggregation (LACP), which provides maximum 1000Base-T performance to our local domain as well as to IT's server.

The cost of the NAS specified is about $10k.

Dave

[Attached: network diagram (post-9494-13819690282486_thumb.jpg)]

