These are the best Huawei P30 cases to protect the triple-lens flagship https://ift.tt/2I3QxWi

Despite having a more impressive bigger sibling, the Huawei P30 is a heck of a flagship phone, with Huawei’s latest and most powerful processor, a variety of stunning colors (check out the Breathing Crystal and Amber Sunrise variants), and an amazing triple-lens camera with a 3x optical zoom and Huawei’s new SuperSpectrum technology. But while it’s an impressive flagship phone, it’s just as prone to damage as any other phone. Since you’re likely to be spending upward of $900 on it, protection is a must. We’ve done the hard work for you; here are the best Huawei P30 cases you can buy right now.

Olixar Ultra-Thin Clear Case
If you really can’t bear to hide your phone away from the world with a large case, consider a clear case. Olixar’s ultra-thin protective cases use TPU to shield your device from hazards; the soft but durable material is excellent at absorbing shocks from bumps and small drops, while also keeping the phone safe from dirt and fingerprints. The soft material offers extra grip and even includes a raised lip along the edges of the screen to protect it further. It’s extremely slim and completely clear, so you can forget it’s there. However, that slim build means it’s not as protective as bigger cases. Still, it’s a good choice if you want to pretend you don’t have a case.
Snakehive Vintage Leather Wallet
Getting a case doesn’t have to mean adding featureless black plastic to your phone. A case can add style as well as protection, and wallet cases are a great way to do exactly that. This wallet case from Snakehive is made from European full-grain cowhide nubuck leather, so it looks amazing while also offering great protection when combined with the inner plastic case. The leather wraps around your phone while not in use, providing complete protection for your device, and can fold into a horizontal stand when required. You’ll find soft leather lining the inside of the cover, along with three card slots for spare cards, cash, or travel tickets. You can even customize the case with your initials for a small extra fee.

Spigen Slim Armor
Spigen is one of the most well-known names in phone protection, and it brought some of its best to the P30. The Slim Armor is something of a classic, offering sleek style paired with excellent protection, all packed into a slim package. It uses a combination of soft TPU and hard polycarbonate to guard against drops with a strong, resistant backbone, and it also comes with a horizontal kickstand that folds into the case when not needed. A raised lip protects the screen and the camera lenses from scratches. At $40, it’s admittedly quite expensive, but Spigen is known for its price drops, so it’s definitely worth keeping an eye out for deals.
Krusell Sunne Cover
Not all leather cases have to be wallet cases, and if you’re looking for the elegance of leather without the full coverage of a wallet case, check out Krusell’s Sunne case. It’s made from soft genuine leather laid over a hard polycarbonate shell. The leather ages with your device, creating a patina unique to your own case. The inside of the case is lined with soft material so your phone isn’t scratched. It’s slim and stylish, and it comes in either vintage nude or vintage black. It’s a simple case, but it’s elegant and beautiful. However, it does have open areas around the sides, so it’s not as protective as some other cases.
Ringke Fusion-X
Ringke specializes in rugged protective cases with a distinctive style that goes beyond the usual black plastic look. The Fusion-X is aptly named, as it uses a fusion of different materials in its construction. A hard, clear backplate provides scratch protection while allowing the design of your phone to shine through, while a solid black TPU bumper around the edges adds shock and drop protection certified to the MIL-STD 810G military standard. It has raised bezels to protect the screen, and you’ll find nicely clicky button covers and even a dust cover for the charging port. A great protective case at a good price.
VRS Design Layered Dandy PU Leather Wallet Case
Leather is great, but not everyone likes genuine leather. Thankfully, synthetic PU leather exists, and VRS Design’s Layered Dandy case is one of the best examples of why PU leather is just as good as real leather in many ways. Thanks to the complete coverage, your phone is fully protected in a bag or pocket, while the inner polycarbonate case adds extra protection. There are three slots and an inner pocket for credit cards, travel tickets, or spare cash, and the case closes securely with a magnetic strap. It’s a slim and beautiful case that adds executive style to your phone. A great alternative to real leather wallet cases.
Official Huawei Smart Flip Case
Who better to offer great protection than the phone’s own manufacturer? Huawei has traditionally offered some cool cases, and its Smart Flip cases have always been a particular highlight. Made from synthetic PU leather, this case protects the entire device, though it’s somewhat thinner than most other wallet cases. The really exciting part, however, is on the front. Once clipped onto your phone, the case communicates with it, turning the transparent section into a smart window that shows your notifications, the time and date, and other details. What better feature to show off your awesome phone?
Digital Trends via Digital Trends https://ift.tt/2p4eJdC March 31, 2019 at 07:34AM
There's Something For Everyone In Amazon's World Backup Day Sale https://ift.tt/2FGQ21u
Happy World Backup Day! To celebrate this rare, actually-useful fake holiday, Amazon’s running a huge one-day sale on everything from microSD cards to SSDs to hard drives to NAS enclosures. Unlike most storage sales, this one includes deals from multiple brands, and all the big names like Samsung, Synology, SanDisk, and WD are represented. A few of our favorites are below, but seriously, there’s something for everyone in here, so you owe it to yourself (and your data) to check out the full sale.
Digital Trends via Gizmodo https://gizmodo.com March 31, 2019 at 07:06AM
The best Surface Pro cases and covers https://ift.tt/2HQxG1i

Microsoft’s Surface Pro has been our favorite 2-in-1 for the past few years, and the latest iteration of that now-classic design is no different. But having a super-portable and powerful 2-in-1 is no good if it gets damaged while you cart it around. That’s where a great case or sleeve comes in, providing at least one more layer between your precious convertible tablet and the rigors of the outside world. These are the best Surface Pro cases you can buy.

Kensington BlackBelt Rugged Case
Where most cases offer soft-body protection against the elements, Kensington’s BlackBelt 2nd Degree Rugged Case opts for a polycarbonate body that meets military-grade drop-testing standards. Even if you have slippery fingers, your Surface Pro should survive a fall. Designed with the Surface Pro in mind, this case has specific cutouts for ports, easy Type Cover attachment, and unobstructed audio. There’s also a handy strap for carrying and another that holds the Type Cover in place while you’re on the go. The Surface Pen holder keeps the pen within easy reach and ensures it won’t go missing. There are no color or material options with this case, but its understated look is unlikely to offend anyone in particular and should suit most aesthetic proclivities.
Urban Armor Gear Metropolis
It might look weighty, but the UAG Metropolis is a supremely lightweight and cleverly designed Surface Pro case. It combines military drop-test certified impact protection along all four sides and corners with fantastic compatibility, supporting the Surface Pro 4, 5, and 6, as well as the 2017 Surface Pro. It has space for both the fantastic Type Cover accessory and Microsoft’s Surface Pen, with a magnetic holder to make sure the pen doesn’t get lost. It also comes equipped with an aluminum stand that provides five angular positions and portrait viewing, letting you use the device hands-free for media viewing or presentations. And yet it still provides normal access to the touchscreen, buttons, and ports, and unhindered audio from the built-in speakers. Available in three colors, the UAG Metropolis also offers carry straps at a higher price point.
Fintie Case
The Fintie Case might be one of the cheapest Surface Pro cases we recommend, but that doesn’t mean it’s not worth considering. At less than $20, it is fully compatible with every Surface Pro from the past few years and has full support for the Type Cover, plus its own pen holder. The flip cover acts as an optional wrist rest when needed, and the smart cutouts provide plenty of ventilation for the Surface Pro, as well as access to all ports. Available in a wide array of colors and patterns, the Fintie case is a fantastically affordable and attractive option that any potential buyer should consider.
Tomtoc 360 Protective Sleeve
If you’re more interested in a sleeve that protects your Surface Pro when you aren’t using it, the Tomtoc 360 Protective Sleeve is a good option to consider. It’s compatible with the last few generations of Surface Pro, as well as other laptops that fit the 12.3-inch form factor. Unlike some sleeves that only pad the top and bottom of the laptop, this sleeve has padding around the edges too, so no matter the impact, your Surface Pro will have some protection. It’s also lightweight and trim enough to fit inside a larger carry case or laptop bag for additional layers of protection. Available in a variety of colors and patterns, the Tomtoc 360 Protective Sleeve is well worth considering if you aren’t interested in always-on cases.
ProCase Premium Folio Cover
The ProCase Premium Folio exudes classic sophistication with its combination of composition leather and a soft interior. It protects against scratches and scrapes and has specific cutouts that give full access to ports, buttons, and the camera. The built-in stand adjusts to any angle you want, and an elastic strap holds the cover closed when the Surface Pro is not in use. The Surface Pen can be secured in an easy-access loop on the side. Compatible with the 2017 Surface Pro, as well as the Surface Pro 4, 5, and 6, the ProCase Premium Folio works with and without the Type Cover accessory. It’s available in both brown and purple exterior colors.
Tomtoc Laptop Shoulder Bag
The Tomtoc Laptop Shoulder Bag isn’t specifically designed with the Surface Pro in mind, but it fits it well all the same. Designed for laptops up to 13.3 inches, it’s a little larger than the Surface Pro, but the tablet shouldn’t move around much inside during transit. Like the Tomtoc protective sleeve, the bag has 360-degree protection along all sides and edges to guard against drops and scratches, but the shoulder bag offers far more versatility. Weighing in at just 1.13 pounds, it’s very light, and with both a carry handle and a strap, you can choose how you haul your Surface Pro around. There’s even a pair of zip pockets for accessories or other personal items you take with you on the go. The Tomtoc Laptop Shoulder Bag is available in a wide array of colors too, from classic black and gray to red, blue, and pink.
Digital Trends via Digital Trends https://ift.tt/2p4eJdC March 31, 2019 at 06:37AM
How the Brain Links Gestures, Perception, and Meaning https://ift.tt/2UhbmnB

Remember the last time someone flipped you the bird? Whether or not that single finger was accompanied by spoken obscenities, you knew exactly what it meant. The conversion from movement into meaning is both seamless and direct, because we are endowed with the capacity to speak without talking and comprehend without hearing. We can direct attention by pointing, enhance narrative by miming, emphasize with rhythmic strokes, and convey entire responses with a simple combination of fingers.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

The tendency to supplement communication with motion is universal, though the nuances of delivery vary slightly. In Papua New Guinea, for instance, people point with their noses and heads, while in Laos they sometimes use their lips. In Ghana, left-handed pointing can be taboo, while in Greece or Turkey, forming a ring with your index finger and thumb to indicate everything is A-OK could get you in trouble. Despite their variety, gestures can be loosely defined as movements used to reiterate or emphasize a message—whether that message is explicitly spoken or not. A gesture is a movement that “represents action,” but it can also convey abstract or metaphorical information. It is a tool we carry from a very young age, if not from birth; even children who are congenitally blind naturally gesture to some degree during speech. Everybody does it. And yet, few of us have stopped to give much thought to gesturing as a phenomenon—the neurobiology of it, its development, and its role in helping us understand others’ actions.
As researchers delve further into our neural wiring, it’s becoming increasingly clear that gestures guide our perceptions just as perceptions guide our actions.

An Innate Tendency to Gesture
Susan Goldin-Meadow is considered a titan in the gesture field—although, as she says, when she first became interested in gestures during the 1970s, “there wasn’t a field at all.” A handful of others had worked on gestures, but almost entirely as an offshoot of nonverbal-behavior research. She has since built her career studying the role of gesture in learning and language creation, including the gesture system that deaf children create when they are not exposed to sign language. (Sign language is distinct from gesturing because it constitutes a fully developed linguistic system.) At the University of Chicago, where she is a professor, she runs one of the most prominent labs investigating gesture production and perception. “It’s a wonderful window into unspoken thoughts, and unspoken thoughts are often some of the most interesting,” she said, with plenty of gestures of her own. Many researchers who trained with Goldin-Meadow are now pursuing similar questions outside the University of Chicago. Miriam Novack completed her doctorate under Goldin-Meadow in 2016, and as a postdoc at Northwestern University she examines how gesture develops over the course of a lifetime. No other species points, Novack explained—not even chimpanzees or other apes, according to most reports, unless they are raised by people. Human babies, in contrast, often point before they can speak, and our ability to generate and understand symbolic motions continues to evolve in tandem with language. Gesture is also a valuable tool in the classroom, where it can help young children generalize verbs to new contexts or solve math equations.
“But,” she said, “it’s not necessarily clear when kids begin to understand that our hand movements are communicative—that they’re part of the message.” When children can’t find the words to express themselves, they let their hands do the talking. Novack, who has studied infants as young as 18 months, has seen how the capacity to derive meaning from movement increases with age. Adults do it so naturally that it’s easy to forget that mapping meaning onto hand shape and trajectory is no small feat. Gestures may be simple actions, but they don’t function in isolation. Research shows that gesture not only augments language but also aids in its acquisition. In fact, the two may share some of the same neural systems. Acquiring gesture experience over the course of a lifetime may also help us intuit meaning from others’ motions. But whether individual cells or entire neural networks mediate our ability to decipher others’ actions is still up for debate.

Embodied Cognition
Noam Chomsky, a towering figure in linguistics and cognitive science, has long maintained that language and sensorimotor systems are distinct entities—modules that need not work together in gestural communication, even if they are both means of conveying and interpreting symbolic thought. Because researchers don’t yet fully understand how language is organized within the brain or which neural circuits derive meaning from gesture, the question is unsettled. But many scientists, like Anthony Dick, an associate professor at Florida International University, theorize that the two functions rely on some of the same brain structures. Using functional magnetic resonance imaging (fMRI) scans of brain activity, Dick and colleagues have demonstrated that the interpretation of “co-speech” gestures consistently recruits language-processing centers.
The specific areas involved and the degree of activation vary with age, which suggests that the young brain is still honing its gesture-speech integration skills and refining connections between regions. In Dick’s words, “Gesture essentially is one spire in a broader language system,” one that integrates both semantic processing regions and sensorimotor areas. But to what extent is the perception of language itself a sensorimotor experience, a way of learning about the world that depends on both sensory impressions and movements? Manuela Macedonia had only recently finished her master’s degree in linguistics when she noticed a recurring pattern among the students to whom she was teaching Italian at Johannes Kepler University Linz (JKU): No matter how many times they repeated the same words, they still couldn’t stammer out a coherent sentence. Printing phrases ad nauseam didn’t do much to help, either. “They became very good listeners,” she said, “but they were not able to speak.” She was teaching by the book: She had students listen, write, practice and repeat, just as Chomsky would advocate, yet it wasn’t enough. Something was missing. Today, as a senior scientist at the Institute of Information Engineering at JKU and a researcher at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Macedonia is getting closer to a hypothesis that sounds a lot like Dick’s: that language is anything but modular. When children are learning their first language, Macedonia argues, they absorb information with their entire bodies. A word like “onion,” for example, is tightly linked to all five senses: Onions have a bulbous shape, papery skin that rustles, a bitter tang and a tear-inducing odor when sliced. Even abstract concepts like “delight” have multisensory components, such as smiles, laughter and jumping for joy. To some extent, cognition is “embodied”—the brain’s activity can be modified by the body’s actions and experiences, and vice versa. 
It’s no wonder, then, that foreign words don’t stick if students are only listening, writing, practicing, and repeating, because those verbal experiences are stripped of their sensory associations. Macedonia has found that learners who reinforce new words by performing semantically related gestures engage their motor regions and improve recall. Don’t simply repeat the word “bridge”: Make an arch with your hands as you recite it. Pick up that suitcase, strum that guitar! Doing so wires the brain for retention, because words are labels for clusters of experiences acquired over a lifetime. Multisensory learning allows words like “onion” to live in more than one place in the brain—they become distributed across entire networks. If one node decays due to neglect, another active node can restore it because they’re all connected. “Every node knows what the other nodes know,” Macedonia said.

Wired by Experience
The power of gestures to enrich speech may represent only one way in which gesture is integrated with sensory experiences. A growing body of work suggests that, just as language and gesture are intimately entwined, so too are motor production and perception. Specifically, the neural systems underlying gesture observation and understanding are influenced by our past experiences of generating those same movements, according to Elizabeth Wakefield. Wakefield, another Goldin-Meadow protégée, directs her own lab as an assistant professor at Loyola University Chicago, where she studies the way everyday actions aid learning and influence cognition. But before she could examine these questions in depth, she needed to understand how gesture processing develops. As a graduate student working with the neuroscientist Karin James at Indiana University in 2013, she performed an fMRI study that was one of the first to examine gesture perception in both children and adults. “We, to my knowledge, were the first people looking at gesture processing across development,” Wakefield said.
“That small body of literature on how gesture is processed developmentally has important implications for how we might think about gesture shaping learning.” Wakefield’s study is not the only evidence that gesture perception and purposeful action both stand on the same neural foundation. Countless experiments have demonstrated a similar motor “mirroring” phenomenon for actions associated with ballet, basketball, playing the guitar, tying knots and even reading music. In each case, when skilled individuals observed their craft being performed by others, their sensorimotor areas were more active than the corresponding areas in participants with less expertise. (Paradoxically, some experiments observed exactly the opposite effect: Experts’ brains reacted less than those of non-experts when they watched someone with their skills. But researchers theorized that in those cases, experience had made their brains more efficient at processing the motions.) Lorna Quandt, an assistant professor at Gallaudet University who studies these phenomena among the deaf and hard of hearing, takes a fine-grained approach. She breaks gestures down into their sensorimotor components, using electroencephalography (EEG) to show that memories of making certain actions change how we predict and perceive others’ gestures. In one study, she and her colleagues recorded the EEG patterns of adult participants while they handled objects of varying colors and weights, and then while they watched a man in a video interact with the same items. Even when the man simply mimed actions around the objects or pointed to them without making contact, the participants’ brains reacted as though they were manipulating the articles themselves. Moreover, their neural activity reflected their own experience: The EEG patterns showed that their recollections of whether the objects were heavy or light predictably influenced their perception of what the man was doing. 
“When I see you performing a gesture, I’m not just processing what I’m seeing you doing; I’m processing what I think you’re going to do next,” Quandt said. “And that’s a really powerful lens through which to view action perception.” My brain anticipates your sensorimotor experiences, if only by milliseconds. Exactly how much motor experience is required? According to Quandt’s experiments, for the straightforward task of becoming more expert at color-weight associations, just one tactile trial is enough, although reading written information is not. According to Dick, the notion that brain motor areas are active even when humans are immobile but observing others’ movements (a phenomenon known as “observation-execution matching”) is generally well established. What remains controversial is the degree to which these same regions extract meaning from others’ actions. Still more contentious is what mechanism would serve as the basis for heightened understanding through sensorimotor activation. Is it coordinated activity across multiple brain regions, or could it all boil down to the activity of individual cells?

Mirror Neurons or Networks?
More than a century ago, the psychologist Walter Pillsbury wrote: “There is nothing in the mind that has not been explained in terms of movement.” This concept has its modern incarnation in the mirror neuron theory, which posits that the ability to glean meaning from gesture and speech can be explained by the activation of single cells in key brain regions. It’s becoming increasingly clear, however, that the available evidence regarding the role of mirror neurons in everyday behaviors may have been oversold and overinterpreted. The mirror neuron theory got its start in the 1990s, when a group of researchers studying monkeys found that specific neurons in the inferior premotor cortex responded when the animals made certain goal-directed movements like grasping.
The scientists were surprised to note that the same cells also fired when the monkeys passively observed an experimenter making similar motions. It seemed like a clear case of observation-execution matching but at the single-cell level. The researchers came up with a few possible explanations: Perhaps these “mirror neurons” were simply communicating information about the action to help the monkey select an appropriate motor response. For instance, if I thrust my hand toward you to initiate a handshake, your natural reaction is probably to mirror me and do the same. Alternatively, these single cells could form the basis for “action understanding,” the way we interpret meaning in someone else’s movements. That possibility might allow monkeys to match their own actions to what they observed with relatively little mental computation. This idea ultimately usurped the other because it was such a beautifully simple way to explain how we intuit meaning from others’ movements. As the years passed, evidence poured in for a similar mechanism in humans, and mirror neurons became implicated in a long list of phenomena, including empathy, imitation, altruism and autism spectrum disorder, among others. And after reports of mirroring activity in related brain regions during gesture observation and speech perception, mirror neurons became associated with language and gesture, too. Gregory Hickok, a professor of cognitive and language sciences at the University of California, Irvine, and a staunch mirror neuron critic, maintains that, decades ago, the founders of mirror neuron theory threw their weight behind the wrong explanation. In his view, mirror neurons deserve to be thoroughly investigated, but the pinpoint focus on their roles in speech and action understanding has hindered research progress. Observation-execution matching is more likely to be involved in motor planning than in understanding, he argues. 
Even those who continue to champion the theory of action understanding have begun to pump the brakes, according to Valeria Gazzola, who leads the Social Brain Laboratory at the Netherlands Institute for Neuroscience and is an associate professor at the University of Amsterdam. Although she is an advocate of the mirror neuron theory, Gazzola acknowledged that there’s no consensus about what it actually means to “understand” an action. “There is still some variability and misunderstanding,” she said. While mirror neurons serve as an important component of cognition, “whether they explain the whole story, I would say that’s probably not true.” Initially, most evidence for mirroring in humans was derived from studies that probed the activity of millions of neurons simultaneously, using techniques such as fMRI, EEG, magnetoencephalography and transcranial magnetic stimulation. Researchers have since begun to experiment with techniques like fMRI adaptation, which they can use to analyze subpopulations of cells in specific cortical regions. But they only rarely have the opportunity to take direct measurements from individual cells in the human brain, which would provide the most direct proof of mirror neuron activity. “I have no doubt that mirror neurons exist,” Hickok said, “but all of those brain imaging and brain activation studies are correlational. They do not tell you anything about causation.” Moreover, people who cannot move or speak because of motor disabilities like severe forms of cerebral palsy can in most cases still perceive speech and gestures. They don’t need fully functioning motor systems (and mirror neurons) to perform tasks that require action understanding as it’s loosely defined. Even in monkeys, Hickok said, there is no evidence that damage to mirror neurons produces deficits in action observation. 
Quandt, who considers herself a mirror neuron centrist, makes no claims about how different experiences change the function of individual cells based on her EEG experiments. That said, she is “completely convinced” that parts of the human sensorimotor system are involved in parsing and processing other people’s gestures. “I am 100 percent sure that’s true,” she said. “It would take a lot to convince me otherwise.” Researchers may not be able to pinpoint the exact cells that help us to communicate and learn with our bodies, but the overlap between multisensory systems is undeniable. Gesture allows us to express ourselves, and it also shapes the way we understand and interpret others. To quote one of Quandt’s papers: “The actions of others are perceived through the lens of the self.” So, the next time someone gives you the one-finger salute, take a moment to appreciate what it takes to receive that message loud and clear. If nothing else, it might lessen the sting a bit.

Digital Trends via Wired https://ift.tt/2uc60ci March 31, 2019 at 06:03AM
'The Matrix' Code Came From Sushi Recipes—but Which? https://ift.tt/2JSHIAQ

Do you see it when you close your eyes? Does it show up in your dreams? Odds are, if you saw The Matrix in 1999 or any time thereafter, the image of green characters cascading down a black screen is cemented in your mind's eye. Despite the fact that it's in a movie with one iconic scene after the next, the tumbling green code is one of the film's most enduring images—and gives The Matrix the distinction of having one of the few title sequences in history one could call "brilliant." (I will apologize for nothing, George Lucas.) It looked cool, and it summed up the question at the core of the Wachowskis’ masterpiece: What if none of this is real? What if it's all been programmed? As the months and years went on, and The Matrix got picked apart, folks began to wonder where the movie's now-famous "digital rain" came from. The answer turned out to be far more fascinating than any of the film's mysteries. The code, as CNET reported in 2017, was, in fact, just a bunch of sushi recipes. Simon Whiteley is a production designer at Animal Logic in Australia, but he's best known as The Man Behind the Code. He says he ended up working on the digital rain after Lana and Lilly Wachowski vetoed a previous sequence that a design team working on The Matrix had created. "The Wachowskis didn't feel like the design was old-fashioned and traditional enough. They wanted something that was more Japanese, more manga," Whiteley says. "They asked me if I'd like to have a go working at the code, mainly because my wife is Japanese and she could help me work out the characters and give me insight into which characters were good and which weren't." So Whiteley went home and began browsing through the "stacks of Japanese cookbooks" owned by his wife, looking for inspiration. One recipe book in particular caught his eye, and the recipes therein served as the basis for what would eventually become the film's iconic falling code.
Over the following weeks, Whiteley painstakingly designed and painted each Japanese letter by hand. These were then delivered to Justin Marshall, now a visual effects artist at Animal Logic, who digitized them and wrote the code to make them cascade across the screen. Originally, Whiteley says, the letters were supposed to flow across the screen from left to right, but when he saw the animation he says it "wasn't evoking any emotion for me."
Whiteley returned to the source. Like most Japanese texts, the recipe books were written "back to front" and sentences were read top to bottom. So Whiteley asked Marshall if he could flip the code so it flowed down from the top of the screen—and the rest is history. "The movie is very machine oriented," Whiteley says. "I love that idea that it's about something so mechanical, but amongst it the actual code is extracted from something so organic and free-flowing." Whiteley, who has worked on the visual effects for a number of blockbusters, most recently The Lego Ninjago Movie and Peter Rabbit, says he's surprised people find The Matrix title sequence so interesting all these years later. "The Matrix code was relatively simple to create," says Whiteley. "The strange thing is that it's the most iconic and lasting of all the things I've designed." That is weird, sure, but what is even weirder, in my opinion, is that no one has tried to actually make the sushi recipes embedded in The Matrix's opening credits. Whiteley says his wife still has the recipe book that inspired the digital rain, even if it is beginning to fall apart. Yet when asked to share the cooking instructions, he politely declines. "I've been kind of not wanting to tell anyone what the recipe book is, partly because that's the last bit of magic," says Whiteley. Nevertheless, Whiteley was willing to offer some clues as to which recipe book was used. "It's not actually a book," he says. "It's a magazine, but it's called a book. It's something most Japanese people would've heard of or have on their bookshelf." Whiteley also says Japanese speakers won't be able to lift the recipe straight from the movie because the digital rain is written in code. Moreover, he says, sushi recipes are usually written in hiragana and kanji, which are syllabic and logographic characters, respectively.
The Matrix code, on the other hand, is stylized as katakana, which are syllabic characters used for spelling foreign words. "My wife and I have this funny argument at home," says Whiteley. "She doesn't think you can get a sushi recipe from the code because it's written in katakana. Instead she thinks it's recipes for teriyaki or ramen noodles." Check out all of our 20th anniversary coverage of The Matrix. If you want to revisit it, The Matrix trilogy is free on Amazon Prime. (Note: When you buy something using the retail links in our product reviews, we may earn a small affiliate commission. Read more about how this works.) More Great WIRED Stories. Digital Trends via Wired https://ift.tt/2uc60ci March 31, 2019 at 06:03AM
Futures Aren't Just for Juice. They're for Truck Routes, Too https://ift.tt/2UkALMZ When Randolph and Mortimer Duke first explained the commodities market to Billy Ray Valentine in 1983, they laid out five examples: a cup of coffee, a piece of bread, some slices of bacon, a glass of frozen orange juice, and a few bars of gold. "Some of our clients are speculating that the price of gold will rise in the future, and we have other clients who are speculating that the price of gold is going to fall. They place their orders with us, and we buy or sell their gold for them," Randolph Duke said. Valentine got it right away: "Well, it sounds to me like you guys are a couple of bookies." Had Eddie Murphy, Ralph Bellamy, and Don Ameche been filming that seminal scene of Trading Places today, they could have used another example, albeit one harder to fit on a breakfast table: truck routes. Because now, for the first time, you can speculate on how much it will cost to send a truckful of goods between a few major American cities, more than a year before the driver climbs into the cab. The folks playing the bookies here say it will make the workings of an economically vital industry more transparent and reliable. And observers say this first trucking freight futures marketplace is just the latest sign of how trucking, long stuck in the past, is steadily rolling into the digitized 21st century. The marketplace, which launched Friday, is the work of analytics and research firm FreightWaves, Virginia-based Nodal Exchange, and DAT (once known as Dial-a-Truck), which operates the country's largest load board, a system for connecting truckers to things that need trucking. It offers contracts up to 16 months in advance for shipments on a handful of key routes: between Seattle and Los Angeles, Los Angeles and Dallas, and along the triangle formed by Chicago, Atlanta, and Philadelphia.
Say you know you're going to be shipping 50 trucks' worth of Ray Beri sunglasses from Atlanta to Chicago in April 2020, just as warm weather returns to the Second City. You head to Nodal Exchange, which would offer you a price—say $1.40 a mile—to pay for that shipment, calculated based on DAT's data. You tell your broker to make the deal, paying for the distance you intend to drive, plus the fee that Nodal splits with DAT and FreightWaves. When the time comes, costs have gone up by 30 cents a mile. You still pay your carrier the $1.70 rate, but because you've settled on the $1.40 number with Nodal, they pay you back the difference. But if trucking costs drop between the time of your deal and your real-world movement, you cough up the difference. The idea is to hedge your costs. That can be a big deal in a world where driver shortages, port strikes, weather, and other factors can drastically shift prices, and in which guarantees are hard to come by. Turns out, most trucking contracts are closer to handshake deals than binding agreements. If the pricing situation changes, either side can decide to renegotiate. Futures have been around since long before Eddie Murphy went into movies. They're a key tool for farmers especially, who use them to lock in guaranteed prices on crops before a good or bad harvest can knock them into bankruptcy. Many futures are for physical products: oats, aluminum, cheese, oil. Others border on the metaphysical. You can buy and sell futures on snowfall, box office returns, and energy prices. This new trucking market falls in the latter camp. "You're not buying a future where a truck will show up and physically bump your dock," says FreightWaves CEO Craig Fuller. It's a financial instrument that offers a guarantee against an often volatile market, whether you're shipping stuff or carrying it. Of course, it's open to speculators, too.
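The arithmetic of that hedge is simple enough to sketch in a few lines of Python. The $1.40 and $1.70 rates come from the example above; the roughly 720-mile Atlanta-to-Chicago distance, the function name, and the omission of exchange fees are assumptions for illustration, not real Nodal Exchange mechanics.

```python
# Illustrative sketch of the freight-futures hedge described above.
# Rates are from the article's example; the route distance and the
# zero-fee simplification are assumptions.

def hedge_settlement(contract_rate, spot_rate, miles):
    """Return (carrier_cost, futures_payout, net_rate_per_mile)."""
    carrier_cost = spot_rate * miles                      # paid to the carrier at the spot rate
    futures_payout = (spot_rate - contract_rate) * miles  # the exchange settles the difference
    net_rate = (carrier_cost - futures_payout) / miles    # effective rate, locked at the contract
    return carrier_cost, futures_payout, net_rate

# Lock in $1.40/mile; spot rises to $1.70 over ~720 miles (Atlanta to Chicago).
cost, payout, net = hedge_settlement(1.40, 1.70, 720)
# Whichever way spot moves, the shipper's effective rate stays at $1.40/mile;
# if spot falls instead, the payout is negative and the shipper pays the exchange.
```

The same function covers the falling-market case: with a spot rate of $1.10, the payout comes back negative, which is the "you cough up the difference" branch of the deal.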
The fact that futures have finally come for trucking is just the latest sign that the industry is changing. It has long been a conservative business, where carriers and their customers connect through personal relationships. DAT's load board started in the 1970s as an actual bulletin board with index cards at the Jubitz truck stop in Portland, Oregon. But it's only in recent years that much of that activity has moved off the telephone and fax machine and onto the internet. Lately, smartphones and mandatory electronic logging devices have generated far more data than ever before, making capacity and volume easier to track and understand in real time. That has allowed for the rise of digital brokerage firms like Uber Freight, which connect carriers and shippers without anyone having to ask how anyone's kids are doing. "I think carriers are open to selling their capacity in the market in a way they might not have been 20 years ago," says Jon Gilbert, a supply chain management and logistics specialist with PLG Consulting. "What I view Freightwaves as being is part of the progression toward the commoditization of transportation services." So even if Duke and Duke ended up with more frozen oranges than their bottom line could handle, at least now they'd know how best to move them to thirsty breakfasters across the country.
The Physics of Building Jumps in 'The Matrix' https://ift.tt/2JSQM92 Wait. You haven't seen The Matrix? It's a modern sci-fi classic and now it's also 20 years old. Well, you should watch it. Here's the basic idea—some dude (Neo) finds out he's been living in a computer program. Since his world isn't "real," he is able to do some superhuman things—like dodge bullets and jump from one building to the next. Yes, this building jump is what I want to look at. It's one of the first real tests for Neo as he learns to manipulate this computer world. The goal is to run and jump from the top of one very tall building to the next building. Morpheus starts off to show Neo how to do it and makes it easily. Neo crashes. Good thing it's not real life. Even though this is just a computer simulation, it's still fun to consider how a human could make this jump. Let's go over two possible methods to make this jump (in the Matrix).

Running Really Really Fast

A normal human couldn't make that building-to-building jump in the real world. But what if you could run faster? How fast would you have to run to make that jump? Of course the first question: How far apart are the buildings? I'll be honest. I spent quite a bit of time looking for these EXACT buildings in real life. I failed. However, it looks like it's just two normal buildings across the street from each other. Based on my measurements of real buildings (on Google Maps), I think 25 meters seems fair. So, how fast do you have to run to make a jump this far? Assuming air resistance is negligible, this becomes a standard projectile motion problem. Once Neo is in the air, the only force acting on him is the downward gravitational force. This means that his horizontal velocity is constant and his vertical acceleration is -9.8 m/s² (we call this -g). The horizontal and vertical motion can be treated as two separate kinematics problems to produce the following equations.
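The equations referred to here did not survive the trip from the original article, but they are the standard constant-acceleration kinematics pair: constant horizontal velocity, and vertical motion under gravity alone.

```latex
x(t) = x_0 + v_x t, \qquad y(t) = y_0 + v_{y0}\,t - \frac{1}{2} g t^2
```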
Although the horizontal and vertical motions are mostly independent, they still happen in the same amount of time (t). If I solve for the total time in one direction, I can use that in the other direction. That's just what I'm going to do. OK, so in this case I am going to assume the human (computer model of a human) is running really fast. At the edge of the building the human pushes up off the ground to initiate the jump. However, it's just a normal human running fast. This means that the vertical jump is still a normal jump with a normal vertical height. Let's say that the human can jump with a vertical height of about 0.45 meters. This would give a hang time of 0.6 seconds. Yes, I know it seems longer than that, but it's not. Now back to the horizontal motion. Neo has just 0.6 seconds to go all the way across from one building to the next. With a change in distance of 25 meters in just 0.6 seconds, that means he must run with a speed of 41.7 meters per second (93 mph). I told you it was really fast.

Jumping Really Hard

Yes, this is similar to the previous jumping method. However, in this case the human is going to have a launch speed in both the vertical and horizontal directions instead of just running fast. This means that Neo will be in the air for much longer than 0.6 seconds, so he won't need as large a horizontal velocity. But this also means that he is going to need a superhuman push off the ground to get him into the air. This human jumps with an initial velocity of v0, but that means there is a component of velocity in the x-direction and the y-direction that depends on the launch angle. What angle is the best? Well, maybe it's not the best, but if you want the maximum horizontal range for that velocity then the angle should be 45 degrees. Why? I'll just leave this older derivation here—but you need to be careful. A launch angle of 45 degrees only maximizes range for cases that start and end at the same height (level ground).
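Both numbers are easy to check. Here is a quick sketch in Python using the article's values (the 25-meter gap is the author's estimate from Google Maps, not a measured fact):

```python
import math

g = 9.8      # gravitational field, N/kg (equivalently m/s^2)
gap = 25.0   # estimated building-to-building distance, m

# Method 1: run really fast, with an ordinary jump's ~0.6 s hang time.
hang_time = 0.6
run_speed = gap / hang_time  # horizontal speed needed; about 41.7 m/s (93 mph)

# Method 2: launch at 45 degrees. Level-ground range is R = v0^2 * sin(2*theta) / g,
# so the required launch speed is v0 = sqrt(R * g / sin(2*theta)).
theta = math.radians(45)
launch_speed = math.sqrt(gap * g / math.sin(2 * theta))
# about 15.65 m/s, which the article rounds to 15.6 m/s (roughly 35 mph)
```

Note that the range formula, like the 45-degree claim itself, only holds for level ground and no air resistance.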
Also, this doesn't work if there is air resistance on the projectile. You have been warned. Since this case deals with "level ground" and no air resistance (because I said so), I can easily find the launch velocity needed to travel a given horizontal distance. For a distance of 25 meters, Neo would have to jump at a 45-degree angle with a launch speed of 15.6 m/s (34.8 mph). That's not humanly possible, but at least it's a slower speed than just running.

Change the Gravitational Field

The Matrix isn't real. So why would anyone have to constrain themselves to real things? Instead of running fast or jumping fast, you could just change the gravitational field. The gravitational field is the force per unit mass on the surface of the Earth. We usually use the symbol "g" for this, and it has a typical value of 9.8 newtons per kilogram. If you drop (or throw) an object, the gravitational force (weight) is proportional to the object's mass, so the mass cancels out of the acceleration. This means that all falling objects have the same vertical acceleration of 9.8 meters per second squared (which is an equivalent unit to N/kg). If you decrease this gravitational field, you should be able to jump farther. But what value should you use if you can change it? How about looking at the successful building jump by Morpheus? From the video, he takes about 4.2 seconds to complete the jump. If I assume he jumps like a normal human with an upward speed of 3 m/s (this would give a 0.6-second hang time under normal gravity), then the gravitational field would be 1.4 N/kg. Oh, this is about the same gravitational field as the surface of the moon (1.6 N/kg). Maybe that's how Morpheus does it. He just pretends he is on the moon. If you need some homework, how about you repeat these three calculations but include air resistance? That would be fun.
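The gravity trick can be checked the same way. A minimal sketch, assuming (as the article does) a normal 3 m/s launch speed and a 4.2-second hang time read off the film:

```python
v0 = 3.0   # m/s, a normal human's upward launch speed
t = 4.2    # s, Morpheus's hang time, estimated from the scene

# For a projectile launched straight up, hang time is t = 2 * v0 / g,
# so solving for g gives the field that makes the jump possible:
g_matrix = 2 * v0 / t
# about 1.43 N/kg, close to the Moon's surface value of 1.6 N/kg
```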
Grab this sweet Corsair wireless headset for a discount this week https://ift.tt/2YCPDWt All throughout March, Corsair has offered a range of headsets, keyboards, and gaming mice for lower than normal prices on Amazon. As we head into the first week of April, there is still time to pick up one of Corsair's great headsets on sale. The Corsair HS70 is on sale once again for $80, down from its standard retail price of $100, from March 31 to April 6. The HS70 is the latest in Corsair's HS series of headsets and comes in three variants: white, carbon, and a special edition (SE). Though marketed as a PC gaming headset, the HS70 can also be used with the PlayStation 4 via the included dongle. The HS70 is fully wireless and features 7.1 surround sound. One of the most important aspects of any gaming headset is its sound range and accuracy. The HS70 has 50mm neodymium drivers to help you hear intricate noises even during the fiercest of in-game firefights, while offering enough accuracy to hear footsteps and tell which direction they're coming from. The HS70 has an impressive frequency range and great bass, so it performs well as a standard audio headset as well. Featuring low-latency 2.4GHz wireless audio, the HS70 has a range of up to 40 feet and can last for 16 hours on a single charge. The unidirectional microphone is designed to pick up your voice while eliminating ambient noise for crisp communication. You can also detach the microphone if you simply want to play with game audio, or if you're using the headset to listen to music. Volume control and mute buttons are conveniently located directly on the earcups. Built for comfort over the course of long gaming sessions, the HS70's earcups are made of memory foam and can be adjusted for fit. It's always difficult to find a headset that remains comfortable through prolonged play, and the HS70 is one of the best in the comfort department, especially in its price range.
If you are looking for a headset for Xbox One or Switch, perhaps check out the HS60. Normally $70, the HS60 has been on sale as of late for around $50 on Amazon. The only real difference between the two models is that the HS60 is wired, not wireless.
We strive to help our readers find the best deals on quality products and services, and choose what we cover carefully and independently. If you find a better price for a product listed here, or want to suggest one of your own, email us at dealsteam@digitaltrends.com.
Digital Trends may earn commission on products purchased through our links, which supports the work we do for our readers.
Digital Trends via Digital Trends https://ift.tt/2p4eJdC March 31, 2019 at 02:35AM
Valve to fix 'deep-rooted issues' with Artifact instead of releasing updates https://ift.tt/2I1VdMa Valve has decided to pause Artifact updates; the gaming company will instead focus on fixing the "deep-rooted issues" of the digital card game. Artifact was revealed at The International 2017, instead of the highly anticipated Half-Life 3. The digital card game, with unique gameplay that mimics the mechanics of Dota 2, was supposed to take on the likes of Hearthstone and Gwent. However, like its announcement, Artifact has so far been a disappointment. "Artifact represents the largest discrepancy between our expectations for how one of our games would be received and the actual outcome," Valve wrote in a blog post on the game's official website, breaking a silence that had lasted since the game's last update in January. The company said that according to feedback from players, it is clear that there are "deep-rooted issues" with the digital card game, and that the original strategy of releasing updates with new features and cards will not be enough to fix the problems. Valve said that it will now focus on re-examining Artifact's game design, economy, and social experience, instead of working on updates for the digital card game. The company did not reveal a specific timeline for the endeavor, only stating that it expects the process to take "a significant amount of time." Artifact was slammed even before it launched due to its business model. While Hearthstone and Gwent are free-to-play games, Valve's digital card game has a starting cost of $20 that gives players 10 booster packs, five event tickets, and a pair of starter decks. Beyond that, players could initially acquire new cards only by spending real-world money. In addition, players complained about the lack of progression mechanics. Valve made changes to address the backlash over Artifact microtransactions, but they were apparently not enough to maintain player interest.
The digital card game has averaged only a little more than 350 concurrent players over the past 30 days, a steep decline from about 11,200 concurrent players in December, according to Steam Charts. It remains to be seen what changes Valve will implement to fix the problems surrounding Artifact, and whether they will be enough to revive interest in the digital card game.
Investigator Jeff Bezos Hired to Look Into Who Got His Dick Pics Says Saudis Broke Into Bezos's Phone https://ift.tt/2FKnLXY Earlier this year, Jeff Bezos beat the National Enquirer to the punch by announcing the dissolution of his marriage—which the tabloid was planning to blow open anyway in the form of a lengthy exposé of his affair with news anchor and media personality Lauren Sanchez. The mildly-interesting-at-best story got a whole lot more interesting, however, when Bezos alleged that the Enquirer and its parent company American Media Inc. had attempted to blackmail and extort him with sexts, including a "dick pic," they had kept in reserve. The point of this alleged blackmail, Bezos said, was to derail an investigation into who obtained his sexts, as well as to force him to issue a statement denying that the Enquirer's marching orders came from someone with a political motive and that AMI or anyone associated with it had engaged in hacking. Bezos implied that someone with a vendetta against him, such as Donald Trump or the notoriously authoritarian Saudi Arabian government, could have been involved, though so far the trail has led to Lauren Sanchez's pro-Trump brother and alleged AMI serial snitcher, Michael Sanchez. Well, on Saturday the security consultant running Bezos's investigation, Gavin De Becker, wrote in the Daily Beast that his team believes it was indeed the Saudis. According to De Becker, the motive was to hit back at Bezos, who owns the Washington Post, which has understandably been highly critical of the Saudi government's brutal torture and murder of their colleague, dissident journalist in self-imposed exile Jamal Khashoggi. De Becker outlines several reasons he believes the Saudis obtained the messages. The first is that AMI appears to have done its best to serve up Michael Sanchez's name on a platter:
De Becker added how unusual he thought it was that AMI's alleged blackmail attempt included language specifically asking Bezos to say not only that it had not used "any form of electronic eavesdropping or hacking in their news-gathering process" but that their reporting was not "instigated, dictated or influenced in any manner by external forces, political or otherwise." De Becker also highlighted numerous examples of what he said was the tight relationship between AMI and the Saudi government, including their publication of a gaudy promotional magazine nationwide before Crown Prince Mohammed bin Salman visited the U.S. last year, and AMI chairman David Pecker's one-on-one meetings with the prince (including one at the White House with the president, the details of which remain private). He also said he had confirmed reports that the Saudi government has access to powerful cyber-intelligence tools that would let it spy on targeted devices; researchers with the Toronto-based Citizen Lab have said they identified this type of spying on dissident Omar Abdulaziz, who was in contact with Khashoggi before his death. De Becker also cited reports that Saudi-funded troll armies had relentlessly attacked Bezos online, including with anti-Semitic rhetoric (Bezos is not Jewish). Pecker's relationship with the president, described in the piece by De Becker, is also well known—Pecker received immunity from federal prosecutors for information about Trump's lawyer Michael Cohen, specifically hush money Cohen paid to women alleging affairs with Trump. Pecker was reportedly aware of this because AMI was buying and sitting on the rights to such embarrassing stories about Trump on the latter's behalf, a practice known as "catch-and-kill." However, De Becker did not provide hard evidence for his claim that the Saudis orchestrated the incident (and stated he could not confirm whether AMI was necessarily aware of how the compromising Bezos sexts were originally obtained).
Instead, he wrote, they had referred their finding that “the Saudis had access to Bezos’s phone, and gained private information” to federal investigators, and alluded to an extensive investigation that supported that conclusion:
Without knowing what De Becker has up his sleeve, most of this boils down to circumstantial evidence. For example, AMI's demand for a statement denying it hacked anyone could be because any break-in to a computer system, like Bezos's or Lauren Sanchez's phones, could potentially violate the Computer Fraud and Abuse Act, and someone at AMI could be nervous it violated the terms of Pecker's deal with prosecutors. But it's fair to say that if true, and coming at a time when the Saudi government's human rights record has come under a renewed wave of scrutiny, it would have the makings of an international incident. The Saudi government did not return a request for comment from Reuters, but the news agency did note it issued a denial in February 2019, for whatever that's worth. Digital Trends via Gizmodo https://gizmodo.com March 30, 2019 at 07:30PM