The Flu Season Is Almost Over. So How Bad Was It?
Flu season is still technically not over, but the tail end is the perfect time to look back and ask: how bad was it? Everyone wants the answer to this question while the peak is happening—is it worse than last year? When was the last time it got this bad? How can we do better next year?
These are all important questions, but they’re difficult to answer during flu season. All the data you need lags a couple of weeks behind, because doctors have to report flu cases and then someone needs to process and analyze all the information. Often, by the time the Centers for Disease Control and Prevention has figured it out, everyone else has moved on.
So—now that you’ve likely already forgotten how freaked out you were about the flu just a few weeks ago—here’s a look back at what made this season so bad and what we can do about it in the future.

We heard about it more
Overall hospitalizations and deaths might not be so different in sum, but peaks feel more dramatic as they occur. Pediatric deaths, one of the metrics the CDC uses to track flu severity, weren’t all that different from 2014-15 (though we’ll get more data in the coming weeks), but this year still seemed more dangerous.

So more people got tested

Which is how we know that H3N2 viruses, like this year’s, are especially brutal
Even in the last five years, the quality of our diagnostic tests has improved dramatically. The CDC can look not just at how well animals inoculated with vaccines are able to fight off the circulating virus (that’s the traditional way of testing vaccine effectiveness), but also at the specific genetic changes the surface proteins on the flu virus are undergoing.
But you know all this already. Here’s what you probably don’t know:

H3N2 flu is figuring out how to evade our immune systems and vaccines
All flu viruses mutate to escape our natural defenses and vaccines—it’s why we can get infected every year, even if we get the flu shot—but H3N2 has devised a particularly clever way of doing it that we probably wouldn’t know about if it weren’t for our improved diagnostics.
Flu viruses have two key proteins on their surface, hemagglutinin and neuraminidase. They’re both important, but we’re going to focus on hemagglutinin, or HA. Dan Jernigan thinks of it this way: imagine a flu virus attaching to the walls of your respiratory tract like Velcro. The HA proteins sticking off the virus are the prickly side of the Velcro, and the hairy side is the surface of your respiratory cells. The HAs have a little pocket on top that lets them grab onto your cells and get inside, where the virus hijacks your cells’ machinery to make thousands of copies of its own genome.
Your immune cells mostly recognize flu viruses based on those HAs. But the H3N2 strain’s HAs have been continually evolving to try to evade the immune system. HAs are becoming more specialized, only grabbing onto very specific proteins on certain respiratory cells, and they’re adding new bits to further confuse immune cells trying to find the flu. Essentially, many of the HAs on H3N2 viral particles now sprout a sort of umbrella of little sugar molecules, obscuring the HA pocket that your immune system would normally latch onto. Now, suddenly, your immune cells can’t recognize the flu as a foreigner.
This is especially bad because that sugar umbrella falls off when we grow the virus in eggs to make vaccines. For some reason it’s no good at surviving in egg cells, so H3N2 loses the protective feature before we get a chance to develop a vaccine capable of recognizing it. Your body can find other proteins by which to identify the flu, but our current vaccines are of limited help.

On the bright side, today’s kids might have great protection against H3N2
Jernigan also noted that part of what determines how we respond to a flu virus is which viruses strike us during early childhood. “Older folks that got exposed to H1N1 in the 1930s and ’40s, when it came back around they were relatively protected,” he says. Since they weren’t exposed to H3N2, which is a relatively recent virus, they’re more susceptible to infection.
But that means kids right now are probably developing immunity to H3N2 strains, since that’s been the dominant strain recently. Jernigan thinks kids under age nine might have the best protection, and should this strain come back around when they’re older, they’ll get less sick than adults do today.

But we still need better vaccines
In the near term, though, the fact remains that we need better vaccines. Non-egg options are crucial, and while we’re working toward a universal vaccine—one that will keep working year after year—Jernigan says we need to look at increasing efficacy in creative ways. More antigens, adjuvants to help boost immune response to a vaccine, and fine-tuning the viruses inside to make them look more like the circulating types are all important options to consider for the upcoming seasons.
This flu season felt particularly nasty—and in many ways, it was—but we have to remember that people die from the flu every single year. We have to push for improvements all the time, not just the years we think of the flu as a serious problem. Influenza is a worthy opponent. “This virus has been with us for a very long time,” says Jernigan. “The bar is very high, but I’m optimistic that we’ll find solutions.”
The lawsuit over the terrible MacBook Pro butterfly keyboards has proceeded to a new stage after overcoming a motion to dismiss, bringing the story of Apple’s awful keyboards back into the news. Today, we can see the likely resolution: Apple is floating rumors about a return to the older scissor-style switches, which won’t have the same massive reliability problems. What makes the keyboards so terrible, and why will the rumored fix solve those problems?

Why Apple’s Butterfly Keyboards Are Terrible
There are several attributes that make Apple’s butterfly-style keyboards unpleasant to use. Keep in mind that the explanations below are community speculation: Apple has not directly acknowledged the widespread failure of these keyboards or stated why they fail so frequently.
Low key travel: the keys barely move when you press them, which is the opposite of the tactile feedback you want on a keyboard. Aside from the poor user experience, this low travel ensures that any particulates that find their way into the key mechanism cannot find their way back out.
Delicate mechanism: the key mechanism itself is far more delicate than a standard scissor key or membrane-style key. Thanks to the low key travel, there’s little room for excess or error. Because we use keyboards percussively, delicate switches make for fragile keyboards.
Poor dust management: on many keyboards, including Apple’s butterfly keyboards, the keys work like a pump, actively sucking air and any nearby dust into the key mechanism. Normally, this wouldn’t be a huge issue, but thanks to the low travel and delicate key system, even a small, moderately sticky crumb can jam a key permanently. While the silicone condom wrapping each key stem on newer models helps, it does not solve the problem entirely.
This jamming can happen on any key, but it occurs most frequently on the 1u-sized keys used for letters, numerals, and punctuation. The largest keys are actually the most durable, since they have the greatest distance between the key edge and the actuating mechanism, allowing dust to enter the key without unduly jamming the switch.

How Can Scissor Switches Fix Apple’s Keyboards?
We don’t know much about Apple’s planned design for their scissor-style keys. However, we can look at Apple’s previous keyboards. The keyboards on pre-2016 MacBook Pros were reliable and functional, with scissor-style switches likely built with commodity parts. Simply returning to that design with zero changes would be a major upgrade over Apple’s current laptop keyboards. But why?
Mostly, it comes down to their physical design differences. Scissor-style keys require more key travel to function. That’s a big plus for tactile feedback, but it also could help reliability. Greater key travel makes it easier for particles to leave the key if they get trapped. Extended travel also permits a higher key actuation point, meaning you can perform a keystroke without bottoming out the switch. Even if some small piece of food gets lodged in the mechanism, a scissor-style key would likely still register, since it doesn’t need to depress to the absolute bottom of its travel to function.
Scissor switches are also far less delicate and stand up to much rougher treatment. The more robust scissor-style mechanism makes it easier to crush or eject particles blocking key travel. Plus, users like them. All those pluses have made scissor switches the industry-standard choice for most laptop manufacturers. They’re thin, users don’t mind them, and they work reliably.
The “downside” of scissor switches is their greater height. It’s that requirement that convinced Apple to shift towards their own design. But given the choice between a thin keyboard and a reliable keyboard, most users would prefer the latter.

What’s Next?
We can speculate on why Apple created their own proprietary butterfly mechanism, but the results of their experiment have been overwhelmingly negative. If Apple has a single competent hardware designer, they’ll seriously consider a switch away from their ill-fated butterfly keyboards. With the right design, Apple can return to scissor-style switches, and their laptop users will rejoice.
Alexander Fox is a tech and science writer based in Philadelphia, PA with one cat, three Macs and more USB cables than he could ever use.
Python 3.11 is the first release to benefit from a project called Faster CPython!
The Python programming language releases new versions yearly, with a feature-locked beta release in the first half of the year and the final release toward the end of the year. The feature set for Python 3.11 has just been finalized, with a beta version available for testing. Developers are encouraged to try out this latest version on non-production code, both to verify that it works with their programs and to get an idea of whether their code will benefit from its performance enhancements. Let’s look at the features in Python 3.11.

Typing improvements
Python’s type-hinting features make larger codebases easier to manage and analyze, and they have expanded significantly with each revision since Python 3.5. Python 3.11 brings several new type-hinting additions.

The Self type
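Here is a minimal sketch of the new annotation, using a hypothetical QueryBuilder class. The TYPE_CHECKING guard is only there so the example also runs on pre-3.11 interpreters, where typing.Self does not yet exist at runtime:

```python
from __future__ import annotations  # postpone annotation evaluation

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from typing import Self  # importable at runtime only on Python 3.11+


class QueryBuilder:
    def __init__(self) -> None:
        self.clauses: list[str] = []

    # Before 3.11, annotating this return type required a TypeVar
    # bound to the class; now it is simply Self.
    def where(self, clause: str) -> Self:
        self.clauses.append(clause)
        return self


q = QueryBuilder().where("age > 21").where("active")
print(q.clauses)
```

Because `where` returns `Self`, a type checker knows that chained calls on a subclass of `QueryBuilder` still produce that subclass, not the base class.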
Class methods that return self previously required obtuse and verbose annotations to be useful. typing.Self lets you annotate the return value of a class method as, simply, Self, so you get useful and predictable results from your analysis tools for such methods.

CPython Optimizations
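These interpreter speedups require no code changes at all. A quick, unscientific way to compare interpreters on your own workload is a timeit micro-benchmark like this sketch, run once under each Python version:

```python
import timeit

# A small, interpreter-bound workload. Run this same script with
# python3.10 and python3.11 and compare the printed times; the
# absolute numbers depend entirely on your machine.
t = timeit.timeit("sum(i * i for i in range(1_000))", number=10_000)
print(f"workload took {t:.3f}s")
```

For anything more rigorous, the pyperformance suite mentioned above is the tool the CPython developers themselves use.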
CPython is the reference implementation of the Python programming language. Written in C and Python, it is the default and most widely used implementation of the language. In version 3.11, the CPython interpreter is much more optimized and much faster than in version 3.10. CPython 3.11 is on average 1.22x faster than CPython 3.10 when measured with the pyperformance benchmark suite and compiled with GCC on Ubuntu Linux. Depending on your workload, the speedup could be 10–60%. In Python 3.11, the developers have mostly focused on faster startup and faster runtime, as stated in the documentation.

Arbitrary string literal type
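Here is a minimal sketch of the idea, with a hypothetical run_query helper. Note that the check is performed by a static type checker such as mypy or pyright, not at runtime, and the TYPE_CHECKING guard keeps the example runnable on pre-3.11 interpreters:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from typing import LiteralString  # Python 3.11+


def run_query(sql: LiteralString) -> str:
    # A real implementation would execute the SQL; here we just echo it.
    return sql


# Fine for a type checker: built entirely from source literals.
print(run_query("SELECT name FROM users"))

# A type checker would flag this call, because `table` is an
# arbitrary str (imagine it came from user input), not a literal.
table = "users; DROP TABLE users"
run_query("SELECT name FROM " + table)  # runtime still executes this
```

This is aimed squarely at injection bugs: an API annotated with LiteralString simply cannot be handed a string tainted by runtime data without the checker complaining.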
Previously, type annotations had no way to indicate that a given variable needed to be a string literal—that is, a string defined in source code. The new typing.LiteralString annotation fixes that. Using it, type checkers can verify that a variable is either a string defined in the source or a string composed only of source-defined strings.

Python 3.11: I am Speed
Every new version comes with lots of improvements, and Python 3.11 is no exception. The feature every developer was waiting for, speed, is finally here. Since an object’s type rarely changes, the interpreter now attempts to analyze running code and replace general bytecodes with type-specific ones. For instance, binary operations (add, subtract, and so on) can be replaced with specialized versions for integers, floats, and strings.

How is Python 3.11 gaining performance?
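The specialization just described can be observed with the standard dis module. On 3.11, its new adaptive flag shows the quickened, type-specialized instructions after a function has been warmed up (a sketch; the exact instruction names are interpreter internals and may change between versions):

```python
import dis
import sys


def mul(a, b):
    return a * b


# Warm the function up so the adaptive interpreter can specialize it.
for _ in range(100):
    mul(2.0, 3.0)

if sys.version_info >= (3, 11):
    # On 3.11+, this shows specialized bytecode such as a BINARY_OP
    # quickened for floats, instead of the generic instruction.
    dis.dis(mul, adaptive=True)
else:
    dis.dis(mul)  # older interpreters: only the generic bytecode exists
```

If the argument types later change (say, to strings), the instruction de-specializes and can re-specialize for the new types, which is why the PEP calls the interpreter "adaptive".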
Python function calls also require less overhead in Python 3.11. Stack frames for function calls now use less memory and are more efficiently designed. And while recursive calls aren’t tail-optimized (which probably isn’t possible in Python, anyway), they are more efficient than in previous versions. The Python interpreter itself also starts faster, and core modules needed for the Python runtime are stored and loaded more efficiently.
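A rough way to probe call overhead specifically is to time a trivial function call against the equivalent inline expression. The absolute numbers are machine-dependent, but the gap between the two timings approximates the per-call cost, and it shrinks on 3.11:

```python
import timeit


def increment(x):
    return x + 1


# The function-call version pays frame setup and teardown on every
# iteration; the inline version does not.
call_time = timeit.timeit("increment(1)", globals=globals(), number=1_000_000)
inline_time = timeit.timeit("1 + 1", number=1_000_000)
print(f"call: {call_time:.3f}s  inline: {inline_time:.3f}s")
```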
Python 3.11 is the first release to benefit from a project called Faster CPython (CPython being the standard version of the interpreter). Faster CPython is funded by Microsoft, and its members include Python inventor Guido van Rossum, Microsoft senior software engineer Eric Snow, and Mark Shannon, who is under contract to Microsoft as tech lead for the project.
A session scheduled for the EuroPython event to be held in Dublin in July centers on some of the changes that enable the speed-up. Shannon will describe the “adaptive specializing interpreter” in Python 3.11, which is PEP (Python Enhancement Proposal) 659. This describes a technique called specialization which, Shannon explains, “is typically done in the context of a JIT [just in time] compiler, but research shows specialization in an interpreter can boost performance significantly.”
The interpreter identifies code that can benefit from specialization, and “once an instruction in a code object has executed enough times, that instruction will be ‘specialized’ by replacing it with a new instruction that is expected to execute faster for that operation,” states the PEP. The speed-up can be “up to 50 percent.”
Bitcoin, the largest crypto token, has traveled a long journey since its inception. Perceptions of the digital asset have moved from pessimistic to more optimistic. At press time, it was trading just shy of the $48,000 mark.
Nonetheless, Bitcoin has suffered a significant amount of volatility in the past. Speculation, risks, and regulations have mainly contributed to this.

Biggest risks involved
Crypto veteran Anthony Pompliano recently discussed the flagship token in a podcast, putting forward his hypothetical long-term bear case. He mainly highlighted five key FUDs in the interview. He noted that, unlike what Bitcoin proponents believe, the coin won’t cross over into becoming a currency.
“The most it can grow to is gold, which is basically a store of value. [It] isn’t really used to go purchase things on a day-to-day basis and therefore, yes there may be more upside, but it’s kind of a capped upside.”
BTC vs. gold has been a long-running debate. Even though the token has recorded impressive runs against the precious metal, the latter still lies far ahead in terms of valuation.
“The second one is eventually we’re gonna figure out who Satoshi is, and if we figure out who Satoshi is, that’s going to be a bad person, and we’re not gonna want to know who it is, and there’s gonna be a negative impact.”
Meanwhile, BTC’s real-life use cases, like cross-border payments, came up as well.
“It’s slow. It’s expensive. It’s kind of all these like technical issues with it.”
Moving on, crypto regulations have also put pressure on Bitcoin’s value, with the surrounding uncertainty causing interested individuals to veer away. According to Pompliano, the bear case is that the market gets regulated, taxed, and eventually shut down by being outlawed.
Lastly, he noted another particular risk that could undermine the integrity of the Bitcoin network. According to Pompliano,
“The best argument somebody could make and what I think is the biggest risk to Bitcoin – it really pisses the critics off because it has nothing to do with anything external.”
He also mentioned how an ecosystem like that has to undergo a methodical development process, with filters and security checks in place. However, if a bug were introduced into the code, it would be like “shooting yourself in the foot.”
Nevertheless, in the longer term he remained bullish irrespective of the FUDs in the market.
“The reason why Bitcoin is so attractive is I actually don’t need to have the best returning asset. I don’t need to be greedy. I want the thing that has a great economic return, but also I know it’s gonna be around in 50 years.”
No one, including governments, could possibly stop crypto from existing. “Not even a nuclear war can shut it down,” he opined. In addition, Tesla chief and billionaire investor Elon Musk had expressed similar thoughts on the subject at the Code Conference.
Meanwhile, Mike Alfred, CEO of BrightScope and Digital Assets Data, tweeted:
The biggest risk is not that #bitcoin will go to zero, but that you’ll sell your #bitcoin too soon.
— Mike Alfred (@mikealfred) October 2, 2023
As of now, Bitcoin has recovered some ground but still stands below $48,000. Its market capitalization is just shy of $900 billion at press time, and its dominance over alternative coins sits at 42.5%.
Surprisingly, the Vive and the Quest 2 occupy starkly different areas of the market, with the former costing hundreds of dollars more than the latter, leading a lot of VR junkies to ask what the big idea is.
At first, this price difference seems fairly outlandish. They both do roundabout the same things, right? But once you take a closer look at some of the technical specs of these two devices, the hefty Vive price tag seems a lot more reasonable. Let’s discuss why.

Refresh Rates
A display’s refresh rate is the number of times per second it redraws the image. The higher the refresh rate, the smoother the picture and motion will appear. A high refresh rate is nice to have on any screen, but it’s especially important when it comes to VR.
The Vive and the Vive Pro have a 90Hz refresh rate, and the Pro 2 boasts a whopping 120Hz refresh rate.
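To put those numbers in perspective, the rendering budget per frame is just the reciprocal of the refresh rate. A quick back-of-the-envelope sketch, using the figures quoted in this article:

```python
# Milliseconds the headset has to finish rendering each frame
# before the display refreshes again.
for headset, hz in [
    ("Oculus Quest 2", 75),
    ("Vive / Vive Pro", 90),
    ("Vive Pro 2", 120),
]:
    print(f"{headset}: {1000 / hz:.1f} ms per frame at {hz}Hz")
```

So moving from 75Hz to 90Hz cuts the frame budget from about 13.3 ms to about 11.1 ms: frames arrive more often, which is exactly what makes motion feel smoother in a headset.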
On a regular computer screen, the difference between the Oculus Quest 2’s 75Hz and the Vive’s 90Hz refresh rate might be fairly negligible, but in VR, that difference allows you to play harder for longer.

Tracking
When a computer senses your physical movements, then represents and displays those movements on screen, it’s known as tracking.
VR units need incredibly low latency and accurate tracking; otherwise, the VR spell is well and truly broken. If the response were anything other than immediate, players would lose interest in minutes.
It’s a close call, but the HTC Vive and Co. have the most accurate tracking on the market, which ultimately means that they provide the most immersive and realistic gaming experience.
Using SteamVR tracking, the Vive captures your movement almost perfectly, only ever being off by a couple of centimeters here and there.

Field Of View
The field of view in a virtual reality game is the maximum observable environment you can see at any one moment, and once again, the HTC Vive has the edge on its main competition, the Oculus Quest 2.
The original Vive, the Vive Pro, and the Pro 2 offer 110° of visible surroundings at all times, while the Quest 2 only lets you see 100°. This means you’re literally getting more virtual world from the Vive, and as you’d imagine, more world means more money.

Display
The original HTC Vive has a 2160 x 1200 resolution across both eyes (1080 x 1200 per eye), while the Oculus Quest 2 offers 1832 x 1920 per eye. So, on raw pixel count, the Quest 2 actually comes out ahead here.
In terms of display design, the Vive has an edge once again. The Quest 2 has a liquid crystal display, whereas the Vive has an OLED display, offering sharper contrast ratios and truer blacks.
In addition, the Vive Pro 2 blows both these VR stalwarts out of the water with its 2448 x 2448 resolution.

Battery Life
Here’s one area where the Vive drops the virtual ball, offering only 2.5 hours of battery life. But since it can also be PC-powered, battery life isn’t such an issue. Besides, HTC still generally dominates in terms of battery power.
Take the HTC Vive Cosmos Elite, for instance. This VR beast’s battery can last as long as 8 hours. That’s one hell of a virtual reality session!
The Oculus Quest 2 will only ever last for 3 hours max, which is still great. Three hours is plenty to really get stuck in and enjoy some virtual worlds, but it doesn’t even compare to the Cosmos Elite. Naturally, the Elite costs more.

Original Programming
The Vive was the first device of its kind for the companies responsible for its creation. The programming was all first-generation, built from scratch, and building something this complex from the ground up with no blueprints to work from takes a lot of manpower.
HTC and Valve would have had to hire a vast amount of specialists to get the Vive project up and running, and everyone that worked on it needs to be paid, thus, the price of the Vive rises a little higher.
Nowadays, the original Vive is becoming quite difficult to come by, so even though we’re on to the Pro 2, prices remain high due to scarcity.

Summing Up
There you have it, folks. Sure, the HTC Vive was, and still is, expensive, but is the price tag unreasonable? I don’t think so.
Once you factor in all the aspects of its design and that it was basically a prototype VR device built from scratch, the price tag doesn’t sting so much. You feel that you might actually be getting what you pay for, and let’s be honest, you’re getting a lot.
Outside other HTC and Valve products, a 90Hz VR experience is still a rarity today, so hats off to HTC, and hats off to the Vive. If you can afford one, you’re going to have an absolute blast!
TapBots’ beloved Twitter client Tweetbot has finally arrived for the Mac, following an extensive beta period that began in July. The bad news is, it will run you a whopping twenty bucks a pop! It’s not that developers have become greedy overnight, mind you. As you know, Twitter has capped third-party clients’ user bases in a quest to exercise total control over third-party programs.
Twitter is doing so by enforcing token limits on third-party developers. Tokens determine how many users an app like Tweetbot for Mac can have. As a result, developers can only sell the app until they use up all the tokens Twitter has allocated.
That’s the official line. Some people think it’s crap, others point the finger of blame at Twitter. You could call it economics, I guess. No matter how you look at it, Tweetbot for Mac – at least to my knowledge – has officially become the priciest Twitter client on the Mac App Store…
TapBots explains the situation in a blog post announcing the official release of Tweetbot for Twitter (that’s the official name):
Because of Twitter’s recent enforcement of token limits, we only have a limited number of tokens available for Tweetbot for Mac. These tokens dictate how many users Tweetbot for Mac can have.
The app’s limit is separate from, but much smaller than, the limit for Tweetbot for iOS. Once we use up the tokens granted to us by Twitter, we will no longer be able to sell the app to new users. Tapbots will continue to support Tweetbot for Mac for existing customers at that time.
So pricing the app at twenty bucks was the way to go, eh?
This limit and our desire to continue to support the app once we sell out is why we’ve priced Tweetbot for Mac a little higher than we’d like. It’s the best thing we can do for the long term viability of the product.
We know some will not be happy about Tweetbot for Mac’s pricing, but the bottom line is Twitter needs to provide us with more tokens for us to be able to sell at a lower price.
Is there anything you can do about it?
Sure, call your senator or “feel free to let Twitter know how you feel about it”.
And if you think $20 for a desktop Twitter client is too much, TapBots argues it really isn’t:
If you think about it, it’s not that expensive. Twenty dollars for a quality piece of software that you use every day? That has been the price point for quality utility apps on the Mac for years. However, it’s not just the development time and attention we put into the app that commands the higher price.
The final version enables brilliant iCloud sync of your timeline position across iOS and Mac versions and integrates with Mountain Lion’s Notification Center.
Yes, it supports Retina graphics as well.
Brief highlights, via iTunes:
◆ Save drafts, add locations and POI’s, attach photos/videos, manage your lists, and much more.
So, will you be taking the plunge?
I’m not so sure.
Not at this price, not in this economy.