WHO’s Air Quality Index Shows Life Expectancy in India Down by 4 Years


India is the world’s second most polluted country, slightly trailing only Nepal, the Energy Policy Institute at the University of Chicago (EPIC) said on Monday.

This loss of more than four years of life expectancy is up from about two years in the late 1990s, driven by a 69 per cent increase in particulate pollution, it said.

Concentrations in Indian states of Bihar, Uttar Pradesh, Haryana, Punjab, and the National Capital Territory of Delhi are substantially higher, and the impact on life expectancy exceeds six years.

The tool gives figures such as this: for an average resident of Delhi, the gain in life expectancy if the WHO guidelines were met could be up to 10.2 years.

Likewise, it gives the number of years lost to pollution for every district of India over an 18-year span beginning in 1998.

What makes AQLI unique is that it converts pollution into perhaps the most important metric that exists — life expectancy. It does so at a hyper-local level throughout the world.

Further, it illustrates how air pollution policies can increase life expectancy when they meet the World Health Organization’s (WHO) guideline, existing national air quality standards, or user-defined air quality levels.

Loss of life expectancy is highest in Asia, exceeding six years in many parts of India and China; some residents of the US still lose up to a year of life from pollution.

Fossil fuel-driven particulate air pollution cuts global average life expectancy by 1.8 years per person, according to the pollution index and accompanying report produced by the EPIC.

“Around the world today, people are breathing air that represents a serious risk to their health. But the way this risk is communicated is very often opaque and confusing, translating air pollution concentrations into colors, like red, brown, orange, and green. What those colors mean for people’s wellbeing has always been unclear,” Michael Greenstone, the Milton Friedman Professor in Economics and Director of the EPIC, said.

Greenstone also noted: “My colleagues and I developed the AQLI, where the ‘L’ stands for ‘life’ to address these shortcomings. It takes particulate air pollution concentrations and converts them into perhaps the most important metric that exists, life expectancy.”

The results from these studies are then combined with hyper-localised, global particulate matter measurements, yielding unprecedented insight into the true cost of air pollution in communities around the world.

Seventy-five per cent of the global population, or 5.5 billion people, live in areas where particulate pollution exceeds the WHO guideline.

The AQLI reveals that India and China, which make up 36 per cent of the world’s population, account for 73 per cent of all years of life lost due to particulate pollution.

On average, people in India would live 4.3 years longer if their country met the WHO guideline, expanding the average life expectancy at birth there from 69 to 73 years.

Those living in the country’s most polluted districts could expect to live up to one year longer if pollution met the WHO guideline.

Globally, the AQLI reveals that particulate pollution reduces average life expectancy by 1.8 years, making it the greatest global threat to human health.

Other risks to human health have even smaller effects: alcohol and drugs reduce life expectancy by 11 months; unsafe water and sanitation take off seven months; and HIV/AIDS four months.

“While people can stop smoking and take steps to protect themselves from diseases, there is little they can individually do to protect themselves from the air they breathe,” Greenstone said.


Who Is The Largest Debt Buyer?

A debt buyer is a company that acquires loans from lenders at a reduced price. Debt buyers, including collection agencies and private-sector debt collectors, purchase overdue or charged-off loans from debt sellers for a percentage of their face value. The debt buyer then recovers the money on its own, by employing a collection agency, by reselling parts of the debt, or through a combination of these options.

Debt collection is a complex and wide-ranging industry. It is big — an estimated $18 billion industry, in fact — and growing. Thousands of debt collectors exist, including collection agencies, law firms, and debt buyers. According to IBISWorld’s Debt Collection Business study, the industry employs more than 130,000 people.

Yahoo Finance has outlined the current trends in the industry in their United States Debt Collection Agencies Market Report.

Understanding Debt Buyers

Debt buyers typically pay a relatively small fraction of the debt’s face value – occasionally as little as pennies on the dollar. Debt buyers can be tiny, privately held organizations or huge publicly traded corporations. They are categorized as active if they attempt to collect the debt directly or passive if they use a collection agency or legal firm to recover a loan. The debt-buying industry is worth billions of dollars.
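As a purely hypothetical illustration of the arithmetic behind “pennies on the dollar” (the purchase and recovery rates below are invented, not drawn from any real portfolio):

```python
# Hypothetical economics of a single purchased account.
face_value = 10_000.00       # original charged-off balance
purchase_rate = 0.04         # "pennies on the dollar": 4 cents per $1 of face value
recovery_rate = 0.20         # fraction of face value ultimately collected

cost = face_value * purchase_rate
recovered = face_value * recovery_rate
profit = recovered - cost

print(f"cost=${cost:,.2f} recovered=${recovered:,.2f} profit=${profit:,.2f}")
```

Even a modest 20% recovery on debt bought at 4% of face value yields a fivefold gross return before collection costs, which is why the steep discounts described above can still be profitable.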


Why are Debt Buyers Used?

If a creditor, such as a mortgage provider or financial firm, becomes unable to collect the money on an existing debt under the conditions of its funding, it may seek to reclaim a portion of its loss. In certain cases, a lender has little or no chance of recovering the funds within the time range specified when the loan or credit was issued.

After acquiring ownership of the account receivables, the debt buyer can undertake several techniques to regain residual value. This might involve settling on new repayment conditions with the borrower or employing fresh collection strategies through a collection agency to force payments.

The debt buyer may collect on their own, hire a collection agency, repackage and sell pieces of the bought portfolio, or employ any combination of these strategies.

Who is the Largest Debt Buyer?

In the United States, there are around 10,000 debt collectors and debt buyers.

Encore Capital Group, based in San Diego, CA, and its affiliates comprise the country’s largest debt buyer and collector. Encore, as a debt buyer, buys overdue or charged-off debts at a discount to the debt’s face value.

Despite paying a lower amount for the loan, they may try to collect the entire amount demanded by the initial loan company. They acquire the right to claim overdue consumer debts such as credit cards, phone bills, and other accounts.

Encore Capital Group and its affiliates are the nation’s largest debt buyers and collectors, having businesses and interests in North America, Europe, Asia, and South America.

Encore Capital Group is a global specialized financial firm that offers debt collection as well as other associated services to customers dealing with a wide variety of financial assets. Encore buys retail debt portfolios from financial institutions, credit unions, and utility suppliers through its subsidiaries across the world.

It buys pools of delinquent consumer debts at steep discounts on face value and administers them by assisting individuals in repaying their commitments and regaining financial stability. Defaulted receivables are unfulfilled financial obligations made by customers to credit originators such as banks, credit unions, consumer financing businesses, and retail shops.

Receivables in default could also include those susceptible to insolvency procedures.

Encore works with people to assist them in paying off their debts, putting them on the path to financial rehabilitation and, eventually, enhancing their financial health. Encore is the only firm of its type to function under a Consumer Bill of Rights, and it offers customers top-line services.

Encore is a NASDAQ Global Select corporation (NASDAQ: ECPG) and a member stock of the Russell 2000, S&P Small Cap 600, and Wilshire 4500 indices.

In the most recent fiscal year, the company’s revenue was $1.40 billion, a decrease from $1.48 billion the previous year. US GAAP net income of $351 million was up 66% year on year.

It moreover offers loan servicing as well as other portfolio management services to debt issuers in Europe for non-performing loans. Via Midland Credit Management, Inc., they are involved in debt portfolio acquisition and collection in the United States. Cabot Credit Management Limited provides debt management solutions in Europe and the UK.

Who Would Actually Buy Microsoft’s Xbox Division?


Last week, a report surfaced suggesting that Stephen Elop, the man who many believe will take over for Steve Ballmer as Microsoft’s next chief executive, would consider selling off the software giant’s Xbox division to focus its efforts on Office and other key platforms that tend to make it more money. The announcement was a surprise, if nothing else, and it speaks to where Elop’s thinking is right now. Elop might appreciate Microsoft’s many businesses and what they offer to customers, but he believes that the company has gone too far into different operations and has lost sight of the divisions that matter.

While I can agree with such a sentiment to some degree, I can’t help but wonder who in the world would buy Microsoft’s Xbox division. After all, it’s one thing to say that you want to sell a division, it’s quite another to actually complete that sale.

Regardless, we need to work on the assumption that Elop would possibly sell off the Xbox division to the highest bidder. Given that framework, I need to ask: who would buy the company’s gaming business?

At first blush, someone might think that Sony would make the move. After all, Microsoft has been a thorn in Sony’s side for years and Xbox Live, alone, could prove to be an important asset to the PlayStation maker as it attempts to build out its online-gaming services.

In addition, gaming has become a central tenet in Sony’s plans for rebirth. And since the gaming business still generates billions of dollars every quarter, Sony could conceivably afford to buy the Xbox division.

The big question with Sony, however, is whether investors would allow such a dramatic move. Sony’s business is by no means in the clear and spending what would undoubtedly be billions of dollars on a gaming business might not make financial sense right now.


Which brings us to Google. This one might be a longshot, but Google at the very least looks like a company that wants to score big in the living room. And what better way to achieve that goal than by buying a division from Microsoft that already has a presence there. Google could certainly afford Microsoft’s Xbox division, but given the companies’ bitter battle, it’s unlikely even Elop would sell the Xbox business to Google.

But what about Apple? According to several reports, Apple is at least considering making a play for the gaming business with its next set-top box (or whatever it has planned). Wouldn’t it make sense for Apple to buy the Xbox division to build out those services more quickly?

So, I’m not quite sure what company would actually get its hands on the Xbox division. Yes, there are some candidates, but plausible examples can be made in each case where it wouldn’t make sense to make the move.

Whether Elop likes it or not, he might just be stuck with the Xbox division.

Semantic Search: How It Works & Who It’s For

For simple user queries, a search engine can reliably find the correct content using keyword matching alone.

A “red toaster” query pulls up all of the products with “toaster” in the title or description, and red in the color attribute.

Add synonyms like maroon for red, and you can match even more toasters.

But things start to become more difficult quickly: You have to add these synonyms yourself, and your search will also bring up toaster ovens.
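To make that limitation concrete, here is a minimal sketch of keyword matching with a hand-maintained synonym table (the product data and `SYNONYMS` mapping are invented for illustration). Note how the toaster oven slips into the results:

```python
# Naive keyword matching over a tiny product catalog.
# A term matches if it appears in the title, or if the product's color
# attribute is one of the term's hand-maintained synonyms.
SYNONYMS = {"red": {"red", "maroon"}}

PRODUCTS = [
    {"title": "Retro Toaster", "color": "red"},
    {"title": "Classic Toaster", "color": "maroon"},
    {"title": "Countertop Toaster Oven", "color": "red"},
    {"title": "Kettle", "color": "red"},
]

def keyword_search(query, products):
    terms = query.lower().split()
    results = []
    for p in products:
        text = p["title"].lower()
        if all(t in text or p["color"] in SYNONYMS.get(t, {t}) for t in terms):
            results.append(p["title"])
    return results

print(keyword_search("red toaster", PRODUCTS))
```

The maroon toaster is found only because someone added the synonym by hand, and the toaster oven matches purely on overlapping text — exactly the two failure modes described above.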

This is where semantic search comes in.

Semantic search attempts to apply user intent and the meaning (or semantics) of words and phrases to find the right content.

It goes beyond keyword matching by using information that might not be present immediately in the text (the keywords themselves) but is closely tied to what the searcher wants.

For example, finding a sweater with the query “sweater” or even “sweeter” is no problem for keyword search, while the queries “warm clothing” or “how can I keep my body warm in the winter?” are better served by semantic search.

As you can imagine, attempting to go beyond the surface-level information embedded in the text is a complex endeavor.

It has been attempted by many and incorporates a lot of different components.

Additionally, as with anything that shows great promise, semantic search is a term that is sometimes used for search that doesn’t truly live up to the name.

What Are The Elements Of Semantic Search?

Semantic search applies user intent, context, and conceptual meanings to match a user query to the corresponding content.

These components work together to retrieve and rank results based on meaning.

One of the most fundamental pieces is that of context.


The context in which a search happens is important for understanding what a searcher is trying to find.

Context can be as simple as the locale (an American searching for “football” wants something different compared to a Brit searching the same thing) or much more complex.

An intelligent search engine will use the context on both a personal level and a group level.

The personal level influencing of results is called, appropriately enough, personalization.

Personalization will use that individual searcher’s affinities, previous searches, and previous interactions to return the content that is best suited to the current query.

It is applicable to all kinds of searching, but semantic search can go even further.

Again, this displays how semantic search can bring in intelligence to search, in this case, intelligence via user behavior.

Semantic search can also leverage the context within the text.

We’ve already discussed that synonyms are useful in all kinds of search, and can improve keyword search by expanding the matches for queries to related content.

But we know as well that synonyms are not universal – sometimes two words are equivalent in one context, and not in another.

When someone searches for “football players”, what are the right results?

The answer will be different in Kent, Ohio than in Kent, United Kingdom.

A query like “tampa bay football players”, however, probably doesn’t need to know where the searcher is located.

Adding a blanket synonym that made football and soccer equivalent would lead to a poor experience when that searcher saw the Tampa Bay Rowdies soccer club next to Rob Gronkowski.

(Of course, if we know that the searcher would have preferred to see the Tampa Bay Rowdies, the search engine can take that into account!)

This is an example of query understanding via semantic search.

User Intent

The ultimate goal of any search engine is to help the user be successful in completing a task.

That task might be to read news articles, buy clothing, or find a document.

The search engine needs to figure out what the user wants to do, or what the user intent is.

We can see this when searching on an ecommerce website.

As the user types the query “jordans”, the search automatically filters on the category, “Shoes.”

This anticipates that the user intent is to find shoes, and not jordan almonds (which would be in the “Food & Snacks” category).

By getting ahead of the user intent, the search engine can return the most relevant results, and not distract the user with items that match textually, but not relevantly.

This can be all the more relevant when applying a sort on top of the search, like price from lowest to highest.

This is an example of query categorization.

Categorizing the query and limiting the results set will ensure that only relevant results appear.
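A minimal sketch of query categorization, with a hypothetical rules table and catalog (the `CATEGORY_RULES` mapping and items below are invented for illustration; real engines typically learn the query-to-category mapping from behavioral data rather than hard-coding it):

```python
# Map known queries to a category, filter the hits, then apply the sort.
CATEGORY_RULES = {"jordans": "Shoes"}

ITEMS = [
    {"name": "Air Jordan 1", "category": "Shoes", "price": 180},
    {"name": "Jordan Almonds", "category": "Food & Snacks", "price": 8},
    {"name": "Jordan Retro 4", "category": "Shoes", "price": 210},
]

def categorized_search(query, items):
    category = CATEGORY_RULES.get(query.lower())
    term = query.lower().rstrip("s")  # crude plural handling for matching
    hits = [i for i in items if term in i["name"].lower()]
    if category is not None:
        hits = [i for i in hits if i["category"] == category]
    return sorted(hits, key=lambda i: i["price"])  # price low-to-high
```

Without the category filter, the price sort would put the $8 jordan almonds at the top of a shoe search — the distraction the text above warns about.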

Difference Between Keyword And Semantic Search

We have already seen ways in which semantic search is intelligent, but it’s worth looking more at how it is different from keyword search.

While keyword search engines also bring in natural language processing to improve this word-to-word matching – through methods such as using synonyms, removing stop words, ignoring plurals – that processing still relies on matching words to words.

Semantic search, by contrast, can return results where there is no matching text but where anyone with knowledge of the domain can see that there are plainly good matches.

This ties into the big difference between keyword search and semantic search, which is how matching between query and records occurs.

To simplify things some, keyword search occurs by matching on text.

“Soap” will always match “soap” or “soapy”, because of the overlap in textual quality.

More specifically, there are enough matching letters (or characters) to tell the engine that a user searching for one will want the other.

That same matching will also tell the engine that the query soap is a more likely match for the word “soup” than the word “detergent.”

That is unless the owner of the search engine has told the engine ahead of time that soap and detergent are equivalents, in which case the search engine will “pretend” that detergent is actually soap when it is determining similarity.

Keyword-based search engines can also use tools like synonyms, alternatives, or query word removal – all types of query expansion and relaxation – to help with this information retrieval task.

NLP and NLU tools like typo tolerance, tokenization, and normalization also work to improve retrieval.
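As a rough sketch of what these keyword-side tools look like in practice (the stop-word list and the one-edit typo threshold are illustrative choices, not any particular engine’s defaults):

```python
# Normalization, tokenization, stop-word removal, and crude typo
# tolerance via Levenshtein edit distance.
import re

STOP_WORDS = {"the", "a", "an", "for"}

def normalize(text):
    return re.sub(r"[^a-z0-9\s]", "", text.lower())

def tokenize(text):
    return [t for t in normalize(text).split() if t not in STOP_WORDS]

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance, one row at a time.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def typo_match(query_term, index_term, max_edits=1):
    return edit_distance(query_term, index_term) <= max_edits
```

This is how a query like “sweeter” can still find sweaters: the terms are one edit apart, so they match under typo tolerance even though the characters differ.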

While these all help to provide improved results, they can fall short with more intelligent matching, and matching on concepts.

Semantic Search Matches On Concepts

Because semantic search is matching on concepts, the search engine can no longer determine whether records are relevant based on how many characters two words share.

Again, think about “soap” versus “soup” versus “detergent.”

Or more complex queries, like “laundry cleaner”, “remove stains clothing”, or “how do I get grass stains out of denim?”

You can even include things like image searching!

A real-world analogy of this would be a customer asking an employee where a “toilet unclogger” is located.

An employee with only a pure keyword-esque understanding of the request would fail unless the store explicitly refers to its plungers, drain cleaners, and toilet augers as “toilet uncloggers.”

But, we would hope, the employee is wise enough to make the connection between the various terms and direct the customer to the right aisle.

(Perhaps the employee knows the different terms, or synonyms, a customer can use for any given product).

A succinct way of summarizing what semantic search does is to say that semantic search brings increased intelligence to match on concepts more than words, through the use of vector search.

With this intelligence, semantic search can perform in a more human-like manner, like a searcher finding dresses and suits when searching fancy, with not a jean in sight.

What Is Semantic Search Not?

By now, semantic search should be clear as a powerful method for improving search quality.

As such, you should not be surprised to learn that the label of semantic search has been applied more and more broadly.

These search experiences, however, don’t always warrant the name.

And while there is no official definition of semantic search, we can say that it is search that goes beyond traditional keyword-based search.

It does this by incorporating real-world knowledge to derive user intent based on the meaning of queries and content.

It’s true, tokenization does require some real-world knowledge about language construction, and synonyms apply understanding of conceptual matches.

However, they lack, in most cases, an artificial intelligence that is required for search to rise to the level of semantic.

Powered By Vector Search

It is this last bit that makes semantic search both powerful and difficult.

Generally, with the term semantic search, there is an implicit understanding that there is some level of machine learning involved.

Almost as often, this also involves vector search.

Vector search works by encoding details about an item into vectors and then comparing vectors to determine which are most similar.

Again, even a simple example can help.

Take two phrases: “Toyota Prius” and “steak.”

And now let’s compare those to “hybrid.”

Which of the first two are more similar?

Neither would match textually, but you probably would say that “Toyota Prius” is the more similar of the two.

You can say this because you know that a “Prius” is a type of hybrid vehicle, having seen “Toyota Prius” in similar contexts as the word hybrid, such as “Toyota Prius is a hybrid worth considering” or “hybrid vehicles like the Toyota Prius.”

You’re pretty sure, however, you’ve never seen “steak” and “hybrid” in such close quarters.

Plotting Vectors To Find Similarity

This is generally how vector search works as well.

A machine learning model takes thousands or millions of examples from the web, books, or other sources and uses this information to then make predictions.

Of course, it is not feasible for the model to go through comparisons one by one (“Are Toyota Prius and hybrid seen together often? How about hybrid and steak?”), so instead the model encodes patterns that it notices about the different phrases.

It’s similar to how you might look at a phrase and say, “this one is positive” or “that one includes a color.”

Except in machine learning the language model doesn’t work so transparently (which is also why language models can be difficult to debug).

These encodings are stored in a vector: a long list of numeric values.

Then, vector search uses math to calculate how similar different vectors are.

Another way to think about the similarity measurements that vector search does is to imagine the vectors plotted out.

This is mind-blowingly difficult if you try to think of a vector plotted into hundreds of dimensions.

If you instead imagine a vector plotted into three dimensions, the principle is the same.

These vectors form a line when plotted, and the question is: which of these lines are closest to each other?

The lines for “steak” and “beef” will be closer than the lines for “steak” and “car”, and so are more similar.

This measure is called vector, or cosine, similarity.
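A toy version of that similarity computation, using invented three-dimensional “embeddings” rather than vectors from a real model (learned vectors would have hundreds of dimensions; these values exist purely to show the math):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: dot product over norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hand-made toy vectors: first two components loosely "food-ness",
# last component loosely "vehicle-ness".
VECTORS = {
    "steak": (0.9, 0.1, 0.0),
    "beef": (0.8, 0.2, 0.1),
    "car": (0.0, 0.2, 0.9),
}

query = VECTORS["steak"]
ranked = sorted(VECTORS, key=lambda w: cosine_similarity(query, VECTORS[w]),
                reverse=True)
print(ranked)
```

Ranking by cosine similarity puts “beef” closer to “steak” than “car” is, matching the intuition about plotted lines above; the same math scales unchanged to vectors with hundreds of dimensions.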

Vector similarity has a lot of applications.

It can make recommendations based on previously purchased products, find the most similar image, and determine which items best match a user’s query semantically.


Semantic search is a powerful tool for search applications that has come to the forefront with the rise of powerful deep learning models and the hardware to support them.

While we’ve touched on a number of different common applications here, there are even more that use vector search and AI.

Even image search or extracting metadata from images can fall under semantic search.

We’re in exciting times!

And yet its application is still early, and its recognized power lends itself to misappropriation of the term.

There are many components in a semantic search pipeline, and getting each one correct is important.

When done correctly, semantic search will use real-world knowledge, especially through machine learning and vector similarity, to match a user query to the corresponding content.


Who Will Win The Battle Of The Streaming Services

Even just twenty years ago we wouldn’t have expected our entertainment needs to take this path. Everything is streamed whether it’s movies, TV, or music. Streaming services are becoming nearly as important as utilities in our home. So now they seem to be directly taking each other on. But who will win the battle of the streaming services?

Let’s start with music. Some people eschew downloading songs and buying CDs in favor of paying a monthly subscription price to listen to whatever they want. Spotify seemed to be the leading contender for years, but now Apple Music has entered the fray. Spotify is now launching a new feature, Release Radar, that seems aimed at Apple Music subscribers. It makes it easier for listeners to find new music that fits their tastes. But Apple’s not taking that lying down. They are rolling out an all-new interface soon that makes it easier to find music as well.

And then there are TV and movies. Many people are forgoing cable and satellite and just streaming their movies and TV shows. Netflix seemed to be the place for movies and Hulu the place for TV shows, but now Netflix is offering first-run TV series, and Hulu has just announced that it won’t offer as much television content anymore, in what seems like it’s placing itself in battle with Netflix. However, Apple has announced that it will begin producing first-run television shows as well, and let’s not leave Amazon Prime out of this battle.

What’s going to happen in all this competition? Will there be any winners or will this just promote more battles between the opposing services? Is this beneficial to the viewing/listening audience? Or is this something that doesn’t help viewers and listeners at all and only serves to make these big tech companies bigger?

Laura Tucker

Laura has spent nearly 20 years writing news, reviews, and op-eds, with more than 10 of those years as an editor as well. She has exclusively used Apple products for the past three decades. In addition to writing and editing at MTE, she also runs the site’s sponsored review program.


How (And Who) To Ask For A Letter Of Recommendation

Letters of recommendation often make or break a graduate school application. It’s important to think carefully about who to ask and how to do it.

Follow these five steps to guarantee a great recommendation, including program-specific tips and email examples.

Step 1: Choose who to ask

Your first step is to decide who you’ll ask to write a letter for you. Ideally, this should be someone who you worked with outside of just the classroom context—for example, a former professor who supervised your research.

It’s important to ask someone who knows you well, even if they are less well known than other professors at your institution. Graduate admissions committees want to get a good sense of your ability to perform well in their program, and this is difficult to accomplish if your recommender only knows you as a face in the crowd.

Who you should ask also strongly depends on the type of program that you’re applying to. Different programs prefer different qualities in their admitted students, and thus weigh types of recommenders differently. Take a look at the program-specific tips below.





For research programs (MPhil, DPhil, PhD, Research Master’s), graduate admissions committees are looking for evidence of your potential as a future researcher.

Since this is tricky to assess from test scores and transcripts, letters of recommendation are often the most important part of a graduate research program application.

Your letter should thus be from someone who can speak to your skills as a researcher. This could be, for example, a professor who supervised you on an independent research project, or the head of a lab that you worked in as an undergraduate.

If you worked as a full-time research or lab assistant after undergrad, ask your managers, who are usually full-time researchers themselves and therefore experts on what makes a good researcher.

Unlike most graduate programs, business schools are less interested in your undergraduate academic performance. Instead, they try to assess your potential to succeed in the workplace, particularly in managerial or leadership positions. The same applies to public policy and other professional programs.

Ideally, your letters of recommendation should come from current supervisors at your work. If this isn’t possible, you should ask coworkers who are senior to you and know your work well.

Although business schools normally prefer candidates with several years of experience, current undergraduates sometimes apply as well. In this case, you should ask internship supervisors or—as a last resort—professors who know you well.

Medical schools look for evidence that you are academically prepared for the study of medicine and that your character is well-suited to becoming a doctor. Admissions committees in medicine prefer academic references, but they also require a few extra steps.

Firstly, while graduate programs usually require two or three recommendation letters, medical schools often ask for more—you may have to submit up to six letters, some of which should be from former professors in the natural sciences.

Finally, if you’ve worked on any research projects, you should submit a letter from your supervisor. Medical schools view research competence as a plus.

Law school letters of recommendation should mostly be from former professors or other academic supervisors.

You should only use non-academic recommenders if they can directly speak to your suitability to study law—for example, if you regularly work with lawyers, or if your job involves skills like critical reading or research that are relevant to legal practice.

Step 2: Reach out and request a meeting

The next step is to get in contact with your potential recommender. If you haven’t talked to them in a while, begin your email with a quick reminder to jog their memory. Be friendly, direct, and concise.

If possible, it’s best to plan a meeting to discuss your request. However, if this isn’t practical (for example, if you’ve moved far away from your undergrad institution), you can skip this step and head straight to the third.

Email example: Requesting a meeting

Hi Professor Smith!

I hope that everything is going well with you and that you’re still enjoying teaching your seminar on the post-World War II international order. I thoroughly enjoyed taking it with you last year as a junior.

I’m currently thinking about what I want to do next year, which will hopefully involve graduate work in political science, and was hoping to meet with you to discuss your thoughts on graduate school. Do you have any time over the next few weeks to meet?




Step 3: Ask for a letter of recommendation

Make your request during your meeting or, if necessary, via email. Let them know what sort of programs you are applying to and when the deadlines are. Make sure to give your recommenders plenty of time!

Instead of just asking for a recommendation letter, specifically ask if they can write you a strong recommendation. This allows your recommender an “out”—for example, if they don’t feel they know you well enough. A bad or even lukewarm recommendation is the kiss of death for any application, so it’s important to ensure your letters will be positive!

If they say they can’t give you a strong recommendation, don’t panic. This gives you the opportunity to ask someone else who can provide you a better recommendation.

Email example: Requesting a recommendation letter

Hi Professor Jones!

How are you? I hope everything is going well and you’re still teaching Introduction to Labor Economics to eager students!

I’ve been out of school for a year now, working as a full-time research assistant in New York City. Come this fall, I’m hoping to apply to a few programs for graduate school, mostly doctoral programs in Economics.

Since I took two economics classes with you (Introduction to Labor Economics in Spring 2023 and Industrial Organization in Fall 2023), I was hoping that you might agree to serve as a letter writer for my graduate program. I wanted to highlight my work in labor economics, since that’s what I’m hoping to study in graduate school. Also, since I loved your classes, I thought you might be a good person to ask!

The letters of recommendation would be submitted through each program’s website in December. I understand, of course, if you’re too busy this summer or if you don’t feel that you would be the best fit to write a letter. My goal is simply to paint as complete a picture as possible of my undergrad career at Western. If you’d like, we can also discuss this on the phone.

I look forward to hearing back from you!



Step 4: Share your resume and other materials

You should send your resume or CV to your recommenders, along with any other material that might jog their memory or aid in their recommendation.

For instance, you may want to send along your statement of purpose or writing sample if one is requested in your application. Admission committees are looking for a cohesive story that the letters of recommendation, personal statement, and CV work together to tell.




For a research program application, when updating your recommenders, make sure to emphasize any publications or large research projects you’ve completed since you worked together.

For a business school application, ask your recommenders to write on your professional performance and managerial/leadership potential, rather than your academic performance. Business schools often give specific prompts that they want recommenders to answer.

Medical schools and other healthcare programs look for more than academic credentials. Since healthcare professionals are expected to uphold high standards of integrity and professionalism, they also like recommenders to write on applicants’ characters and aptitude to serve the public.

Step 5: Remind your recommenders of upcoming deadlines

Finally, you should send an email to your recommenders a few weeks before the letters are due, reminding them of the deadline and asking if there is anything else you can send them to assist in writing the letter.

If any materials are late, programs will often reject your entire application, so it is imperative that your recommenders get their letters in on time. However, you should also keep in mind that your letter writers are probably quite busy, so don’t send too many reminders!

Email example: Sending a reminder

Dear Professor Jones,

Hope the semester is going well! Thank you again for agreeing to serve as my recommender. I just wanted to send you a quick reminder that recommendations for Program X, Y, and Z are due in two weeks, on December 15. Please let me know if you need anything else from me, and thank you again!




Frequently asked questions about recommendation letters

Who should I ask for a letter of recommendation?

Choose people who know your work well and can speak to your ability to succeed in the program that you are applying to.

Remember, it is far more important to choose someone who knows you well than someone well-known. You may have taken classes with more prominent professors, but if they haven’t worked closely with you, they probably can’t write you a strong letter.

Can I ask non-professors for a letter of recommendation?

This depends on the program that you are applying for. Generally, for professional programs like business and policy school, you should ask managers who can speak to your future leadership potential and ability to succeed in your chosen career path.

However, in other graduate programs, you should mostly ask your former professors or research supervisors to write your recommendation letters, unless you have worked in a job that corresponds closely with your chosen field (e.g., as a full-time research assistant).

What do I say when asking for a letter of recommendation?

It’s best to ask in person if possible, so first reach out and request a meeting to discuss your graduate school plans.

Let the potential recommender know which programs you’re applying to, and ask if they feel they can provide a strong letter of recommendation. A lukewarm recommendation can be the kiss of death for an application, so make sure your letter writers are enthusiastic about recommending you and your work!

Always remember to remain polite. Your recommenders are doing you a favor by taking the time to write a letter in support of your graduate school goals.

Cite this Scribbr article

Thomas, L. (2023, June 1). How (and who) to ask for a letter of recommendation. Scribbr. Retrieved July 10, 2023.


The Neuroscientist Who Wants To Upload Humanity To A Computer

The human brain

Everything felt possible at Transhuman Visions 2014, a conference in February billed as a forum for visionaries to “describe our fast-approaching, brilliant, and bizarre future.” Inside an old waterfront military depot in San Francisco’s Fort Mason Center, young entrepreneurs hawked experimental smart drugs and coffee made with a special kind of butter they said provided cognitive enhancements. A woman offered online therapy sessions, and a middle-aged conventioneer wore an electrode array that displayed his brain waves on a monitor as multicolor patterns.

On stage, a speaker with a shaved head and a thick, black beard held forth on DIY sensory augmentation. A group called Science for the Masses, he said, was developing a pill that would soon allow humans to perceive the near-infrared spectrum. He personally had implanted tiny magnets into his outer ears so that he could listen to music converted into vibrations by a magnetic coil attached to his phone.

None of this seemed particularly ambitious, however, compared with the claim soon to follow. In the back of the audience, carefully reviewing his notes, sat Randal Koene, a bespectacled neuroscientist wearing black cargo pants, a black T-shirt showing a brain on a laptop screen, and a pair of black, shiny boots. Koene had come to explain to the assembled crowd how to live forever. “As a species, we really only inhabit a small sliver of time and space,” Koene said when he took the stage. “We want a species that can be effective and influential and creative in a much larger sphere.”

Koene’s solution was straightforward: He planned to upload his brain to a computer. By mapping the brain, reducing its activity to computations, and reproducing those computations in code, Koene argued, humans could live indefinitely, emulated by silicon. “When I say emulation, you should think of it, for example, in the same sense as emulating a Macintosh on a PC,” he said. “It’s kind of like platform-independent code.”

The concept of brain emulation has a long, colorful history in science fiction, but it’s also deeply rooted in computer science. An entire subfield known as neural networking is based on the physical architecture and biological rules that underpin neuroscience.

Roughly 85 billion individual neurons make up the human brain, each one connected to as many as 10,000 others via branches called axons and dendrites. Every time a neuron fires, an electrochemical signal jumps from the axon of one neuron to the dendrite of another, across a synapse between them. It’s the sum of those signals that encodes information and enables the brain to process input, form associations, and execute commands. Many neuroscientists believe the essence of who we are—our memories, emotions, personalities, predilections, even our consciousness—lies in those patterns.

In the 1940s, neurophysiologist Warren McCulloch and mathematician Walter Pitts suggested a simple way to describe brain activity using math. Regardless of everything happening around it, they noted, a neuron can be in only one of two possible states: active or at rest. Early computer scientists quickly grasped that if they wanted to program a brainlike machine, they could use the basic logic systems of their prototypes—the binary electric switches symbolized by 1s and 0s—to represent the on/off state of individual neurons.
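The McCulloch-Pitts idea can be sketched in a few lines of Python. This is an illustrative toy, not code from the original paper or any neuroscience library; the threshold values are chosen to show the logic-gate behavior those early computer scientists noticed:

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts neuron: it fires (1) if enough of its
    binary inputs are active, and stays at rest (0) otherwise."""
    return 1 if sum(inputs) >= threshold else 0

# With the right thresholds, single binary neurons behave like the
# logic gates of a digital computer:
AND = lambda a, b: mp_neuron([a, b], threshold=2)
OR = lambda a, b: mp_neuron([a, b], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```

The parallel to binary switches is exact: each neuron is either a 1 or a 0, so a network of them can, in principle, be represented by the same logic that runs a conventional computer.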

Neuroscientist Randal Koene

A few years later, Canadian psychologist Donald Hebb suggested that memories are nothing more than associations encoded in a network. In the brain, those associations are formed by neurons firing simultaneously or in sequence. For example, if a person sees a face and hears a name at the same time, neurons in both the visual and auditory areas of the brain will fire, causing them to connect. The next time that person sees the face, the neurons encoding the name will also fire, prompting the person to recollect it.

Using these insights, computer engineers have created artificial neural networks capable of forming associations, or learning. Programmers instruct the networks to remember which pieces of data have been linked in the past, and then to predict the likelihood that those two pieces will be linked in the future. Today, such software can perform a variety of complex pattern-recognition tasks, such as detecting credit card purchases that diverge dramatically from a consumer’s past behavior, indicating possible fraud.

Of course, any neuroscientist will tell you that artificial neural networks don’t begin to incorporate the true complexity of the human brain. Researchers have yet to characterize the many ways neurons interact and have yet to grasp how different chemical pathways affect the likelihood that they will fire. There may be rules they don’t yet know exist.

But such networks remain perhaps the strongest illustration of an assumption crucial to the hopes and dreams of Randal Koene: that our identity is nothing more than the behavior of individual neurons and the relationships between them. And that most of the activities of the brain, if technology were capable of recording and analyzing them, can theoretically be reduced to computations.

Koene, the son of a particle physicist, first discovered mind uploading at age 13 when he read the 1956 Arthur C. Clarke classic The City and the Stars. Clarke’s book describes a city one billion years in the future. Its residents live multiple lives and spend the time between them stored in the memory banks of a central computer capable of generating new bodies. “I began to think about our limits,” Koene says. “Ultimately, it is our biology, our brain, that is mortal. But Clarke talks about a future in which people can be constructed and deconstructed, in which people are information.”

It was a vision, Koene decided, worth devoting his life to pursuing. He began by studying physics in college, believing the route to his goal lay in finding ways to reconstitute patterns of individual atoms. By the time he graduated, however, he concluded that all he really needed was a digital brain. So he enrolled in a master’s program at Delft University of Technology in the Netherlands, where he focused on neural networks and artificial intelligence.

By mapping the brain, humans could live indefinitely.

By then, many of the other group members had earned their credentials. And in 2007, computational neuroscientist Anders Sandberg, who studies the bioethics of human enhancement at Oxford University, summoned interested experts to Oxford’s Future of Humanity Institute for a two-day workshop. Participants laid out a roadmap of capabilities humans would need to develop in order to successfully emulate a brain: mapping the structure, learning how that structure matches function, and developing the software and hardware to run it.

Not long afterward, Koene left Boston University to become the director of neuroengineering at the Fatronik-Tecnalia Institute in Spain, one of the largest private research organizations in Europe. “I didn’t like the job once I figured out they weren’t into taking any risks and didn’t really care about futuristic things related to whole brain emulation,” Koene says. So, in 2010, he moved to Silicon Valley to take a job as head of analysis at Halcyon Molecular, a nanotechnology company that had raised more than $20 million from PayPal cofounders Peter Thiel and Elon Musk, among others. Though Halcyon’s goal was to develop low-cost, DNA-sequencing tools, its leaders assured Koene he would have time to work on brain emulation, a goal they supported.

“We need to provide a foundation so the new field of brain emulation is taken seriously,” Koene tells me from his bedroom command center. He opens a color-coded chart on one of the screens. It consists of overlapping circles filled with names and affiliations, divided into wedges representing the roadmap’s objectives. Koene points to the outermost circle. “These are the people who just have compatible R&D goals,” he says. Then he indicates the smaller, inner circle. “And these are the people who are onboard.”

The human brain is made up of billions of nerve cells connected by trillions of synapses. Together, they encode information, such as personality and memory. Scientists at Harvard’s Center for Brain Science have developed a technique called “Brainbow” to map this circuitry in exquisite detail. The cerebral cortex, at the top of this image, stores memory and controls conscious activity, such as motor skills and vision. By making a 3-D dataset of high-resolution images, scientists can trace the brain cells to reveal connections. Livet, Weissman, Sanes, and Lichtman/Harvard University

Today, as it happens, every pillar of the brain-uploading roadmap is a highly active area in neuroscience, for an entirely unrelated reason: Understanding the structure and function of the brain could help doctors treat some of our most debilitating diseases.

By following the threadlike extensions of individual nerve cells from frame to frame, Lichtman and his team have gained some interesting insights. “We noticed, for instance, that when an axon bumped into a dendrite and made a synapse, if we followed it along, it made another synapse on the same dendrite,” he says. “Even though there were 80 or 90 other dendrites in there, it seemed to be making a choice. Who expected that? Nobody. It means this thing is not some random mess.”

When he started five years ago, Lichtman says, the technique was so slow it would have taken several centuries to generate images for a cubic millimeter of brain—about one thousandth the size of a mouse brain and a millionth the size of a human one. Now Lichtman can do a cubic millimeter every couple of years. This summer, a new microscope will reduce the timeline to a couple of weeks. An army of such machines, he says, could put an entire human brain within reach.

At the same time, scientists elsewhere are aggressively mapping neural function. Last April, President Obama unveiled the BRAIN Initiative (for Brain Research through Advancing Innovative Neurotechnologies) with an initial $100 million investment that many hope will grow to rival the $3.8 billion poured into decoding the human genome.

Studying how neurons fire in circuits and how those circuits interact, he says, could help demystify diseases such as schizophrenia and autism. It could also reveal far more. Our very identity, Yuste suspects, lies in the traffic of brain activity. “Our identity is no more than that,” he says. “There is no magic inside our skull. It’s just neurons firing.”

To study those electrical impulses, scientists need to record the activity of individual neurons, but they’re limited by the micromachining techniques used to produce today’s technology. In his lab at MIT, neuro-engineer Ed Boyden is developing electrode arrays a hundred times denser than the ones currently in use. At the University of California, Berkeley, meanwhile, a team of scientists has proposed nanoscale particles called neural dust, which they plan to someday embed in the cortex as a wireless brain-machine interface.

Whatever discoveries these researchers make may end up as fodder for another ambitious government initiative: the European Union’s Human Brain Project. Backed by 1.2 billion euros and 130 research institutions, it aims to create a super-computer simulation that incorporates everything currently known about how the human brain works.


Koene is thrilled with all of these developments. But he’s most excited about a brain-simulation technology already being tested in animals. In 2011, a team from the University of Southern California (USC) and Wake Forest University succeeded in creating the world’s first artificial neural implant—a device capable of producing electrical activity that causes a rat to react as if the signal came from the animal’s own brain. “We’ve been able to uncover the neural code—the actual spatio-temporal firing patterns—for particular objects in the hippocampus,” says Theodore Berger, the USC biomedical engineer who led the effort. “It’s a major breakthrough.”

Scientists believe long-term memory involves neurons in two areas of the hippocampus that convert electrical signals to entirely new sequences, which are then transmitted to other parts of the brain. Berger’s team recorded the incoming and outgoing signals in rats trained to perform a memory task, and then programmed a computer chip to emulate the latter on cue. When they destroyed one of the layers of the rat’s hippocampus, the animals couldn’t perform the task. After being outfitted with the neural implant, they could.
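The record-then-replay logic of Berger’s experiment can be caricatured as a lookup table. This is a deliberate oversimplification: the real implant models continuous spatio-temporal firing patterns, and `intact_circuit` here is just a hypothetical stand-in transformation:

```python
# Phase 1: record the intact circuit's input -> output transformations.
def intact_circuit(input_pattern):
    # Stand-in for the hippocampal layer: some fixed transformation
    # of incoming signals into outgoing ones.
    return tuple(reversed(input_pattern))

recordings = {}
for stimulus in [(1, 0, 1), (0, 1, 1), (1, 1, 0)]:
    recordings[stimulus] = intact_circuit(stimulus)

# Phase 2: the "implant" replays the recorded mapping, standing in
# for the circuit after it has been destroyed.
def implant(input_pattern):
    return recordings[input_pattern]

# For every recorded stimulus, the replacement matches the original.
assert implant((1, 0, 1)) == intact_circuit((1, 0, 1))
```

The point Koene draws from the experiment is exactly this substitutability: if a circuit’s input-output behavior can be captured, something else can reproduce it.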

Berger and his team have since replicated the activity of other groups of neurons in the hippocampus and prefrontal cortex of primates. The next step, he says, will be to repeat the experiment with more complex memories and behaviors. To that end, the researchers have begun to adapt the implant for testing in human epilepsy patients who have had surgery to remove areas of the hippocampus involved in seizures.

“Ted Berger’s experiment shows in principle you can take an unknown circuit, analyze it, and make something that can replace what it does,” Koene says. “The entire brain is nothing more than just many, many different individual circuits.”

That afternoon, Koene and I drive to an office park in Petaluma about 30 miles outside of San Francisco. We head into a dimly lit, stucco building decorated with posters that superimpose words like “focus” and “imagination” over photographs of Alpine peaks and tropical sunsets.

Guy Paillet, a snowy-haired former IBM engineer with a thick French accent and a cheerful Santa Claus–like disposition, soon joins us in a conference room. Paillet and his partner had invented a new kind of energy-efficient computer chip based on the physical architecture of the brain—an achievement that had earned them inclusion in Koene’s chart. Koene wanted an update on their progress.

“That’s a very good idea!” Paillet interrupts, before Koene can even finish asking whether he might fabricate their device too.


As we pull out of the parking lot, Koene is ebullient. I had just witnessed his job at its best. “This is what I do,” he says. “You have got tons of labs and researchers who are motivated by their own personal interests.” The trick, he says, is to identify the goals that could benefit brain uploading and try to push them forward—whether the researchers have asked for the help or not.

Certainly, it seems, many scientists have proven willing to consult and even collaborate with Koene. That was clear last spring, when scientists from institutions as varied as MIT, Harvard University, Duke University, and the University of Southern California descended on New York City’s Lincoln Center to speak at a two-day congress that Koene organized with the Russian mogul Itskov. Called Global Future 2045, the conference’s objective was to explore the requirements and implications of transferring minds into virtual bodies by the year 2045.

Some of those present, however, later distanced themselves from the event’s stated “spiritual and sci-tech” vision. “We were trying to get people with a lot of funding who can do big things to start investing in important questions,” says Jose Carmena, one of the Berkeley neuroscientists working on neural dust. “That doesn’t mean we have the same goal. We have similar goals along the way, like recording from as many neurons as possible. We all want to understand the brain. It just happens that they need to understand the brain so they can upload it to a computer.”

Carmena’s reticence was shared by other researchers, some of whom grew alarmed at even a faint possibility that their opinions about the technical plausibility of brain uploading—however qualified and cautious—might somehow be misinterpreted as an endorsement. “There is a big difference between understanding and building a brain,” Yuste says. “There are many things that we more or less understand but we cannot build.” For example, the brain’s hardware could prove critical, he explained, “or there could be intrinsic stochastic events, like in quantum physics, that could make it impossible to replicate.”

Hayworth, for his part, is now a senior scientist at Howard Hughes Medical Institute’s Janelia Farm Research Campus, a leader in connectomics, where he is developing techniques to precisely image much larger sections of brain than currently possible. He also founded the Brain Preservation Foundation, which has offered a prize for inventing a method that can preserve the brain until emulation technology catches up. “I know this is a controversial topic,” he says, “and there aren’t a heck of a lot of scientific institutes of any type that relish being dragged into it. Hopefully at some point that will change.”

In the meantime, many scientists seem to puzzle over a question more fundamental to the brain uploaders’ goal: What’s the point? Existing indefinitely in the confines of computer code, Lichtman points out, would be a pretty boring life.

Earlier in the day, I had asked Todd Huffman, a member of Strout’s early discussion group, whether the quest really boiled down to achieving immortality. Koene and I had dropped by Huffman’s company, which received venture capital to develop automated brain-slicing and imaging technologies. Huffman was wearing pink toenail polish on his shoeless feet and sported a thick beard and bleached faux-hawk.

“That’s a very egocentric and individualist way of characterizing it,” he responded. “It’s so that we can look at the thought structures of humans who are alive today, so that we can understand human history and what it is to be human. If we can capture and work with human creativity, drive, and awareness the same way that we work with, you know, pieces of matter,” he said, “we can take what it is to be human, move it to another substrate, and go do things that we can’t do as individual humans. We want as a species to continue our evolution.”

Brain uploading, Koene agreed, was about evolving humanity, leaving behind the confines of a polluted planet and liberating humans to experience things that would be impossible in an organic body. “What would it be like, for instance, to travel really close to the sun?” he wondered. “I got into this because I was interested in exploring not just the world, but eventually the universe. Our current substrates, our biological bodies, have been selected to live in a particular slot in space and time. But if we could get beyond that, we could tackle things we can’t currently even contemplate.”

Mind Transfer Through Sci-Fi History

1929: The World, the Flesh, the Devil, by J.D. Bernal

In a passage that captivates generations of futurists, Bernal predicts mankind will one day leave the body behind and achieve immortality, even replacing the “organic brain cell by a synthetic apparatus.”

1956: The City and the Stars, by Arthur C. Clarke

One billion years from now in the city of Diaspar, a central computer creates new bodies for a rotating group of citizens, storing their minds in its memory banks between lives.

1962: The Creation of the Humanoids

1966: “What Are Little Girls Made of?”, _Star Trek_

A lovelorn Enterprise nurse beams down to the planet Exo III with Kirk to search for her fiancé. Alas, he turns out to be a mad scientist who transferred himself to an android body after suffering frostbite.

1968: _2001: A Space Odyssey_

In the film’s finale, mission pilot David Bowman hurtles through space and time until he is transformed into a fetal being enclosed in an orb of light—a reference to mind uploading explained in Arthur C. Clarke’s novel of the same name.



1982: Tron

Not only does an underhanded and mediocre rival rip off videogames designed by the protagonist, he then has the audacity to digitize him into the mainframe using an experimental laser.

1989: “The Schizoid Man,” Star Trek: The Next Generation

1992: Freejack

Mercenary Mick Jagger and henchmen travel back in time to try to snatch Emilio Estevez. A rich guy stored in a future “spiritual switchboard” wants to upload his mind into Estevez’s body and steal his fiancée.


2000: The 6th Day

In the future, an eye scan copies brain contents for transfer to a cloned body. When Arnold Schwarzenegger returns home to discover his clone has moved in with his family, he recruits him to help blow up the cloning facility.

2004: Battlestar Galactica

Dying in battle isn’t that big a deal to members of the cybernetic civilization called the “Cylons.” They have backup copies of their brains and can simply upload them to new bodies when something goes wrong.

2009: _Avatar_

A paraplegic soldier uses a device to telepathically control a genetically grown body and spy on a race of 10-foot-tall, blue aliens. The aliens, and ultimately the soldier, upload their memories to the planet’s neural network.


2014: Transcendence

Shawn Mikula’s technique stains and preserves an entire mouse brain in plastic resin (inset). The brain can then be imaged (a) in order to reconstruct neural circuits (b)—in this case, tracing a single neuron (green) along with synapses (yellow and orange) and nearby cell bodies (blue).

How to Store a Brain (and Everything in It)

While the first upload of a human brain remains decades—if not centuries—away, proponents believe humanity may be far closer to reaching another key technological milestone: a preservation technique that could store a brain indefinitely without damaging its neurons or the trillions of microscopic connections between them.

“If we could put the brain into a state in which it does not decay, then the second step could be done 100 years later,” says Kenneth Hayworth, a senior scientist at Howard Hughes Medical Institute, “and everyone could experience mind uploading first hand.”

To promote this goal, Hayworth cofounded The Brain Preservation Foundation, a nonprofit that is offering a $106,000 technology prize to the first scientist or team to rise to that challenge. He says the first stage of the competition—the preservation of an entire mouse brain—may be won within the year, an achievement that would excite many mainstream neuroscientists, who want to map the brain’s circuitry to better understand memory and behavior.

Current preservation methods (aside from cryonics, which has never been demonstrated to preserve the brain’s wiring) involve pumping chemicals through the body that fix proteins and lipids in place. The brain is then removed and immersed in a series of solutions that draw out its water and replace it with a plastic resin. The resin prevents the chemical reactions that cause decay, preserving the brain’s intricate architecture. But in order for all of the chemicals to fully permeate brain tissue, scientists must first slice the organ into sections 100 to 500 microns thick—a process that destroys information stored in connections made along those surfaces.

Shawn Mikula, a researcher at the Max Planck Institute for Medical Research in Heidelberg, Germany, developed a protocol that appears to safeguard all of the brain’s synapses. It preserves the extracellular space in the brain so that the chemicals can diffuse through myriad layers of the whole organ. Then, if the brain is sliced and analyzed at a future date, all of its circuitry will remain visible. Hayworth is currently using electron microscopy to examine the mouse brains sent to him as proof of principle. (In order to win the technology prize, the protocol must also be published in a peer-reviewed journal.) So far, Hayworth says, Mikula’s technique seems effective.

If immortality is defined as brain preservation via plastination, Mikula says, then it’s a reasonable extrapolation of his research results. But as for actually uploading it to a computer: “Who can predict these things? Science is modern-day magic,” Mikula says, “and in the absence of a strong argument against the future feasibility of mind uploading, anything is possible.”

This article originally appeared in the May 2014 issue of Popular Science.
