
The Power of a LUT

If you’re an experienced shooter or editor, you know what a LUT is and all the wonderful vibrancy (or desolate sadness) one can bring to an image. But if you’re just breaking onto the scene, you may not know the power of a LUT. First off, LUT stands for Look-Up Table: a preset that maps the colors in your original footage to new values, changing the look of a shot or clip. When you apply a LUT to the original files, amazing things can happen! Color grading is most often used to enhance the scene, evoke an emotion or convey a change in time, and a LUT lets you preview what a shot could become once proper color grading happens in post.

[Image: graded frame from Shadowhunters]

This image from Shadowhunters is a great example, and it even tells you exactly which LUT was used: the free LUT f-8700-V2-STD. There are many free LUTs available from Blackmagic, Panasonic and plenty of other awesome free LUT sites – we’ve just provided you with a few!

Another thing to remember is that different LUTs are designed for different cameras and color spaces, so it’s important to check which one is recommended for your footage. The blog post Understanding LUTs: Why One LUT can NOT fit all shots gives an in-depth look at the different kinds of LUTs and their purposes, i.e. technical, creative and camera LUTs.

[Image: Ryan Hayslett’s shot with several different LUTs applied]

This example from Ryan Hayslett shows how different LUTs can affect the same shot, turning it from morning to night instantly.

So the best way I could think of to end this blog was with a cat video – but wait! It’s a cat video that does a great job explaining the different uses of LUTs and it’s from our good friends at Ground Control. #happygrading

[Video: Ground Control’s cat video explaining the different uses of LUTs]


ShotPut Pro 6: Tips and Tricks

ShotPut Pro 6 has now been out for four months. We have been incredibly humbled by the positive response from our faithful customers as well as our new customers. With any major upgrade, there is a learning curve, so we’ve put together some tips and tricks for offloading your media with ShotPut Pro – please be sure you are on the most up-to-date version, 6.2.

  • Let’s start with a new, direct line to us! We’ve included a feedback option under the Help Menu which allows users to connect directly with the developers of ShotPut Pro. Do you have a feature request? Great! Send it to us. Did you find a bug? It happens! Send it to us. Do you have a question you just CAN NOT figure out? No problem! Send it to us.
  • If you’re offloading CARDS we strongly recommend using ‘Source Integrity’ under Advanced Preferences. This option reads the card a second time when calculating the checksum values. While this takes more time, it gives added assurance that problems aren’t occurring on the incoming side, such as overheating readers, bad cable connections, or a failing card – these are actually the most common reasons an offload errors out. If your source is, say, a digital recorder, these types of errors are very unlikely and you can turn Source Integrity OFF (causing a single read of the source for both the copies and the checksum values).
  • If you’re interested in more speed but still want the safety of checksums, turn OFF the PDF report option and use text or CSV logs. PDFs are more time consuming because we’re extracting metadata from the source files, including thumbnails, and generating a PDF.
  • We did not offer the File Size Comparison option in the initial release of ShotPut Pro 6, but soon found that many people were comfortable with its risk/reward proposition and asked us to add it back into the application. We did so with a twist: we added the option to still gather and report the xxHash checksums from the source files, so you’d have them in your reports for downstream copy comparisons.
  • Most user configurations hit a bottleneck at either the card’s or disk’s maximum bandwidth or the available bus throughput. Few users have super setups with fast RAIDs and SSDs everywhere, so a majority of offloads run at around 100 MB/s (the top speed of most spinning media and cards).

Hopefully these tips and tricks have shed a little more light on the power of ShotPut Pro 6. If you are interested in upgrading your ShotPut Pro 5 license or purchasing a full version, please visit our website here. #offloadconfidently


Checksums Part 2: Define and Decide

This is a multi-part blog. As more posts are published, links to them will be added here. Before reading further, we encourage you to begin with Checksums Part 1: The 5 W’s, which defines what a checksum is and why it’s important in the media and entertainment industry.

For Part 2, we asked YOU what you were most interested in, and many of you answered that you’d like each checksum defined, along with an explanation of which is better to use in different situations. Obviously you are all using ShotPut Pro for your offloading, right? Perhaps you’re still researching which offloading application is right for you – this blog will also discuss how to fairly compare offload applications and their use of checksums.

We will not reinvent the wheel, but we will give credit where credit is due! All sources used to help explain and define checksums have been appropriately cited. If you’re looking for some light reading… these are not for you! But if you are looking for more in-depth information, they have plenty.

xxHash – xxHash is an extremely fast, non-cryptographic hash algorithm, working at speeds close to RAM limits. It is proposed in two flavors, 32 and 64 bits. (SMHasher on github.io)

For ShotPut Pro, ShotSum and PreRoll Post we use xxHash 64-bit. We recommend xxHash as the checksum type unless you have a requirement for some other type. xxHash can outperform MD5, for example, because it can run at the speed of your RAM, whereas MD5 is a CPU-bound process.
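If you want to see the difference for yourself, here’s a minimal sketch of chunked file hashing in Python that compares xxHash 64-bit against MD5 on the same file. It assumes the third-party xxhash package (hashlib is built into Python), and the clip name is just a placeholder – point it at a large camera file of your own and compare the timings.

```python
# A minimal sketch (not ShotPut Pro's code): hash one file with xxHash64 and MD5.
# Assumes the third-party "xxhash" package: pip install xxhash
import hashlib
import time

import xxhash


def file_checksum(path, hasher, chunk_size=8 * 1024 * 1024):
    """Stream the file in chunks so large camera files never sit fully in memory."""
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            hasher.update(chunk)
    return hasher.hexdigest()


if __name__ == "__main__":
    path = "A001_C001.mov"  # hypothetical clip name; substitute one of your own files

    start = time.perf_counter()
    print("xxHash64:", file_checksum(path, xxhash.xxh64()),
          f"({time.perf_counter() - start:.2f}s)")

    start = time.perf_counter()
    print("MD5:     ", file_checksum(path, hashlib.md5()),
          f"({time.perf_counter() - start:.2f}s)")
```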

MD5 – The MD5 algorithm is a widely used hash function producing a 128-bit hash value. It can still be used as a checksum to verify data integrity against unintentional corruption. (Wikipedia MD5)

For years MD5 was one of the fastest and most widely trusted checksums available. Although xxHash is becoming more widely used, there are still many companies that require the MD5 checksum for data integrity.

[Image: checksum type menu]

The primary things to consider when choosing a checksum are how fast it is and, when talking about files, what the chance of a checksum collision is. The collision chance is the probability that two different files will map to the same checksum value. xxHash is great because it is fast while still having a low probability of collision. Many even consider hash functions more secure than a byte-by-byte comparison, because the chance that hardware gives back the wrong results is in many cases higher than the chance of a checksum collision.
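To put rough numbers on that, here’s a back-of-the-envelope sketch (ours, not from any of the sources cited here) using the standard birthday-bound approximation for a 64-bit hash like xxHash64. It assumes the hash output is uniformly distributed.

```python
# Birthday-bound estimate of the chance that at least two files share a checksum,
# assuming a uniformly distributed 64-bit hash. Illustrative math only.
import math


def collision_probability(num_files, hash_bits=64):
    space = 2 ** hash_bits
    return 1 - math.exp(-num_files * (num_files - 1) / (2 * space))


for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} files -> ~{collision_probability(n):.2e} chance of any collision")
```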

SHA-1 (Secure Hash Algorithm 1) – is a cryptographic hash function. SHA-1 produces a 160-bit (20-byte) hash value known as a message digest. A SHA-1 hash value is typically rendered as a hexadecimal number, 40 digits long. (Wikipedia SHA-1)

SHA-2 (Secure Hash Algorithm 2) 256 and 512 – this is an upgrade to SHA-1 and includes six hash functions; Imagine Products applications offer two of the six, SHA-256 and SHA-512. Cryptographic hash functions are mathematical operations run on digital data; by comparing the computed “hash” (the output from execution of the algorithm) to a known and expected hash value, a person can determine the data’s integrity. (Wikipedia SHA-2)

MD5 and some of the SHA checksum algorithms are sometimes still required by insurance companies or by the government because they are older and established, but they were designed to be cryptographic hashes. Cryptographic hashes were originally designed to protect things like passwords; they were meant to be complex, and sometimes even deliberately slow, to keep that data safe. That isn’t ideal for most of us, which is why xxHash was created.

These days, unless you have a specific requirement from a vendor, we recommend xxHash (or sometimes MD5), because the speed of the others is usually not worth the trade-off in collision space. For SSD-to-SSD copies you will most likely see a performance hit with any of our algorithms other than xxHash.

 

Comparing Checksum Applications

It’s important to choose the right tools for the job. Obviously we think our products are the best (and so do thousands of others!), but it’s a good idea to test different workflow applications to be sure they give you the data integrity and accuracy you need.

Here’s one thing to remember…

Computers are designed with a combination of caches along the data-handling stream. They’re present in hard disks, in the connection ports and inside the computer’s operating system. The idea is to speed up the return of data requests for ‘known’ recently accessed items.

Think of how your web browser caches web pages you’ve previously visited. Caching lets the browser quickly present recent pages when they’re requested again, without having to go back and download the entire page each time.

The Apple operating system has a similar methodology (as do hard drives themselves). Items recently accessed are kept in a revolving cache of RAM for fast presentation.

So when an application asks the operating system to read back the most recently written file from the output hard disk, macOS says “Oh! No need to go get that again, I have a copy right here!” and simply returns the cached information. That’s great for everyday use, but not if you’re trying to actually compare and verify one copy against another: what comes back is just a repeat of the data in memory, not a fresh, full read of the file from the output disk, which is meaningless for verification purposes and doesn’t even offer the security of comparing file sizes.

In fact, on Apple operating systems you cannot get a true hard disk read of a file (instead of a read from cache) without explicitly circumventing the cache – a method only seasoned programmers would be aware of, or those incredibly interested in how checksums actually work – like yourselves!
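For the curious, here is roughly what that circumvention can look like – a hedged sketch, not ShotPut Pro’s actual code. On macOS a program can set the F_NOCACHE flag on a file descriptor so its reads come from the drive rather than the cache; the constant value below is taken from the macOS <sys/fcntl.h> header (verify it on your system), and xxhash is the same third-party package mentioned earlier. Data that is already sitting in the cache may still be served from it, which is one reason the test below starts by purging the cache.

```python
# A macOS-specific sketch: checksum a file while asking the OS not to cache the reads.
import fcntl
import os

import xxhash  # third-party package, as above

# F_NOCACHE isn't always exposed by Python's fcntl module; 48 is its value in
# macOS <sys/fcntl.h>.
F_NOCACHE = getattr(fcntl, "F_NOCACHE", 48)


def uncached_checksum(path, chunk_size=8 * 1024 * 1024):
    """Checksum a file with data caching turned off for this file descriptor."""
    fd = os.open(path, os.O_RDONLY)
    try:
        fcntl.fcntl(fd, F_NOCACHE, 1)  # turn data caching off for this descriptor
        hasher = xxhash.xxh64()
        while True:
            chunk = os.read(fd, chunk_size)
            if not chunk:
                break
            hasher.update(chunk)
        return hasher.hexdigest()
    finally:
        os.close(fd)
```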

To test different offloading applications, open the Activity Monitor utility on the Mac and click on ‘Disk’. Then open the Terminal utility and flush the cache by typing the command “sudo purge”.

Once you’re ready, do a reasonably sized offload – say 15 GB. Then look at the Disk Read and Write GBs in the lower right table. In order to perform true checksum comparisons, the Read GBs should be roughly double the Write GBs. That’s because you’re reading once from the source, writing once to the output drive, then reading back from the output drive to calculate the checksums.

If the application or method you’re considering for your workflow doesn’t show roughly double the Read GBs compared to the Write GBs, then it’s not actually retrieving the disk’s content to compare with the source files, and it is less secure than a true checksum comparison. In other words, the final destination of your files may not actually match the source – a big problem if you are truly concerned with data integrity. Remember, if it seems too good to be true, it probably is.
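To make that arithmetic concrete, here’s a minimal sketch of the copy-then-verify pattern in Python: the source is read once and hashed as it streams to the destination, then the destination copy is read back and hashed again – which is exactly why a genuine verification pass shows roughly twice as many GB read as written. The function is illustrative (not ShotPut Pro’s internals), and a real tool would also bypass or flush the cache for the read-back, as described above.

```python
# Copy-then-verify sketch: one read of the source, one write to the destination,
# then a second read of the destination to confirm the checksums match.
import os

import xxhash  # third-party package, as above


def verified_copy(src, dst, chunk_size=8 * 1024 * 1024):
    source_hash = xxhash.xxh64()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        for chunk in iter(lambda: fin.read(chunk_size), b""):
            source_hash.update(chunk)  # checksum comes from the single read of the source
            fout.write(chunk)          # the single write to the output drive
        fout.flush()
        os.fsync(fout.fileno())        # push the data out of the write buffers

    dest_hash = xxhash.xxh64()         # second read: pull the copy back off the output drive
    with open(dst, "rb") as fin:
        for chunk in iter(lambda: fin.read(chunk_size), b""):
            dest_hash.update(chunk)

    if dest_hash.hexdigest() != source_hash.hexdigest():
        raise IOError(f"Checksum mismatch copying {src} -> {dst}")
    return source_hash.hexdigest()
```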

That’s all for Checksums Part 2. If you have questions or feedback, leave them here, email us or post them on any of our social media pages. Part 3 is coming soon – let us know what you want to learn more about. And, as always – Offload Confidently!