Don't miss these Exin TMSTE dumps with PDF questions: killexams provides the latest 2022-updated TMSTE questions and answers with actual TMSTE test questions covering the new topics of the Exin TMSTE exam. Practice our real TMSTE questions to improve your knowledge and pass your test with high marks. We guarantee your success in the test center, covering every topic of the exam and improving your knowledge of the TMSTE test. Pass with a 100 percent guarantee with our verified questions.

TMSTE TMap Suite Test Engineer answers

TMSTE answers - TMap Suite Test Engineer Updated: 2024

Free Pass4sure TMSTE braindumps question bank
Exam Code: TMSTE (TMap Suite Test Engineer) - Updated January 2024 by team

TMSTE TMap Suite Test Engineer

Exam Details:
- Number of Questions: The TMSTE (TMap Suite Test Engineer) exam typically consists of multiple-choice and scenario-based questions. The exact number of questions may vary.

- Time: Candidates are usually given a specific amount of time to complete the exam. The duration can vary depending on the certification provider and exam format.

Course Outline:
The TMSTE certification focuses on assessing the candidate's knowledge and skills related to the TMap Suite, a comprehensive approach to software testing. The course outline may cover the following key areas:

1. Introduction to TMap Suite:
- Understanding the principles and concepts of TMap Suite
- Exploring the TMap lifecycle and its phases
- Identifying the roles and responsibilities of a TMap test engineer
- Understanding the importance of quality in software testing

2. Test Planning and Preparation:
- Defining test objectives and test strategies
- Identifying and prioritizing test conditions and test cases
- Creating test plans and test schedules
- Identifying and managing test resources and dependencies

3. Test Execution and Reporting:
- Executing test cases and test scripts
- Logging and tracking defects
- Analyzing test results and reporting
- Conducting regression testing and retesting

4. Test Management and Control:
- Managing test progress and test coverage
- Managing test environments and test data
- Managing test risks and issues
- Conducting test evaluations and test closure activities

Exam Objectives:
The objectives of the TMSTE exam typically include:
- Assessing the candidate's understanding of the TMap Suite principles, concepts, and methodologies.
- Evaluating the candidate's ability to plan, prepare, and execute software testing activities using the TMap approach.
- Testing the candidate's knowledge and skills in managing test resources, tracking defects, and analyzing test results.
- Assessing the candidate's ability to effectively communicate and report test progress and findings.

Exam Syllabus:
The specific exam syllabus for the TMSTE exam may include the following topics:

1. TMap Suite Introduction:
- Introduction to TMap Suite and its components
- TMap lifecycle and phases
- Roles and responsibilities in TMap testing

2. Test Planning and Preparation:
- Test objectives and strategies
- Test conditions and test cases
- Test planning and scheduling
- Test resources and dependencies

3. Test Execution and Reporting:
- Test case execution and defect tracking
- Test result analysis and reporting
- Regression testing and retesting
- Test automation and tools

4. Test Management and Control:
- Test progress and coverage management
- Test environment and data management
- Test risk and issue management
- Test evaluation and closure
TMap Suite Test Engineer

Other Exin exams

ASF EXIN Agile Scrum Foundation
CLOUDF EXIN Cloud Computing Foundation
EX0-115 IT Service Management Foundation based on ISO/IEC 20000
ISFS Information Security Foundation based on ISO/IEC 27002
ISO20KF IT Service Management Foundation Bridge based on ISO/IEC 20000:2018
ITSM20F IT Service Management Foundation based on ISO/IEC20000 (ITSM20F.EN)
MOPF M_o_P (Management of Portfolio Foundation)
MORF M_o_R Foundation Risk Management Certification
MOVF Management of Value Foundation
P3OF Portfolio, Programme and Project Offices Foundation
SCNP-EN SCNP Strategic Infrastructure Security
SCNS-EN SCNS Tactical Perimeter Defense
TMPF TMap NEXT Foundation
TMPTE TMap NEXT Test Engineer
SIAMF BCS Service Integration and Management Foundation
TMSTE TMap Suite Test Engineer
ASM EXIN Agile Scrum Master
ITILFND-V4 ITIL 4 Foundation
PDPF Privacy and Data Protection Foundation
Exin-CDCP Exin Certified Data Centre Professional (CDCP)
BLOCKCHAINF EXIN Blockchain Foundation Certification
GRITC Exin Green IT Citizen (GRITC)

Our TMSTE exam prep material gives you everything you need to take the TMSTE exam. Our TMSTE exam questions come with verified answers that mirror the real exam. We are committed to helping you pass your TMSTE exam with high scores. You need to memorize all the TMSTE questions we provide in your download section and practice with the VCE exam simulator.
TMSTE Braindumps
TMSTE Real Questions
TMSTE Practice Test
TMSTE dumps free
TMap Suite Test Engineer
Question: 27
Usability testing has a lot to do with the quality characteristic user friendliness. What other quality characteristic relates to usability testing?
A. manageability
B. usability
C. reusability
D. portability
Answer: B
Question: 28
What is the aim of the activity assessing the test basis?
A. determining the risk of the various system parts
B. selecting the most suitable test design techniques
C. establishing whether or not the delivered parts of the test object can be adequately tested
D. establishing the testability of the test basis
Answer: D
Question: 29
From a test-economic perspective, the answer to the question "what does testing deliver?" uses the following formula:
Test Yield = Test Benefits - Test Costs
What is an example of Test Costs?
A. reworking costs
B. project management
C. damage in production
D. testing hours
Answer: D
Question: 30
What is the primary effect of the use of a service virtualization tool?
A. the dependency of the availability of services decreases
B. the test design effort for testing the services decreases
C. the effort for the test execution of the services decreases
Answer: A

WBPSC Releases Final Answer Key for Junior Engineer Exam 2022


The West Bengal Public Service Commission (WBPSC) has announced the release of the final answer key for the Junior Engineer (Civil/Mechanical/Electrical) exam 2022 on December 19. Candidates who appeared for the examination can access and download their answer keys from the official website.

The WBPSC JE exam 2022 took place on July 30, 2023. While the exact number of vacancies in various services and posts will be disclosed later, it is specified that all appointments will be provisional initially.

Candidates can follow these steps to download the WBPSC answer key:

Step 1. Visit the official website

Step 2. Navigate to the 'What's New' section on the homepage.

Step 3. Locate and click on the notification for the Junior Engineer (Civil) final answer key.

Step 4. The final answer key for Junior Engineer (Civil) will be displayed on the screen.

Step 5. Verify and download a copy of the answer key.

Step 6. It is advisable to take a printout for future reference.

Tue, 19 Dec 2023 11:51:00 -0600
Toy Expert Answers Toy Questions From Twitter | Tech Support

I'm Chris Byrne.

I'm an independent toy analyst and historian.

I'm here today to answer your questions from the internet.

This is Toy Support.

[calm music]

@adii915 says, why are they called teddy bears

like is it short for Theodore or what?

Actually, the teddy bear is named for Theodore Roosevelt

and in 1902, Teddy Roosevelt refused to shoot a baby bear

on a hunting trip.

It was turned into a cartoon.

Morris Michtom of the Ideal Toy Company saw that.

He wrote to Teddy Roosevelt and said,

can I make that teddy bear

and that's how the teddy bear came along.

Now, that's not the only story

because Steiff which is a German plush company

also said that they made teddy bears at the same time

so that's one of those historic conflicts

but at the end of the day, the teddy bear

is still named for Theodore Roosevelt.

@mactagni says, just seen a commercial of a toy that poop.

How did toy makers come up with these dumb ideas?

There is millions of dollars that's been made

with poop toys over the years

whether it's a magic toilet where the poop disappears

or it's a pooping dog.

This is comedy if you're three years old

consistent with the Freudian anal stage

as children are discovering their bodies.

So for example, when Baby Alive first came out,

Baby Alive was designed to replicate a real child.

We'd seen tiny tears that cried.

We'd seen Betsy Wetsy that wet

but suddenly, there was a mechanism

that allowed the doll to have solid food put in

and it came out into the diaper.

So for a child, that was really important

because they were understanding their bodies at the time

but they also got to nurture

in the way that they were being nurtured

in changing the babies.

It's a very short road from that to dogs that poop

to monkeys that poop to kangaroos that poop

and poop is funny.

@Keith_el wants to know,

how different do prototypes usually end up being

from final product?

Usually when people are developing toys,

they start with two different models,

a looks-like model

which is often done today with 3D printing

and a works-like model which is a mechanism.

So when those two things move together,

very often, that's used as the basis for the manufacturing.

So the prototype may go through many different iterations

and revisions as you're getting to the final product

but when you get a final prototype,

that's almost the complete toy.

@keegantindall asks, for ages eight and up.

How do they decide that for toys?

Are they giving it to younger and younger kids

until something really bad happens?

No, there are elaborate testing mechanisms that happen

for kids of all different ages

and because of the way children's bodies develop,

you can have a mechanism

that measures the size of a child's air pipe

and then when it gets to eight and up,

there's also the cognitive development.

There is the is my child intellectually ready for this

and so it's a real good guideline

but definitely for younger kids, you wanna pay attention

to those age gradings for what kids can do

and what's most safe.

@tobytbeck wants to know,

how can you use math to design a toy?

Toy design is almost all engineering

so it's almost all maths

and you need to figure out the tolerances of plastic

and that's chemistry and physics put together.

So you really need to understand

the different levels of math

so that you can engineer a toy effectively

so it'll stand up under pressure,

it'll pass safety regulations and of course it'll be fun.

@The7Legacy says, do kids still play with action figures?

Yes, they do.

Action figures are huge.

This is one of the most iconic action figures of all time.

This is Optimus Prime from Hasbro.

The term action figure was actually coined

when Hasbro introduced G.I. Joe

and one of the things they brought G.I. Joe to market for

was to be a boy's toy that competed with Barbie

which by 1964 was doing pretty well.

So any toy really targeting largely a boy audience

based on a TV show or that is a figure or that's a doll

is called an action figure

but then there's also the concept of the kidult.

These are adults who are still collecting action figures.

They're the ones who are driving toy lines like McFarlane.

They're the ones who are insisting

on much more detail in the Marvel or DC action figures.

So yes, action figures are still a big part

of the toy industry and a big part of people's lives.

@lizzie_bobizzie says,

who is Rubik and why do we have his cube?

In the 1970s, Erno Rubik designed Rubik's Cube

as a way of helping to teach math.

What was unique about it is the mechanism inside

that allows the cube to turn in different directions

on different planes very easily and fluidly.

Rubik's Cube also became

kind of synonymous with intelligence.

A kid who could solve Rubik's Cube and could solve it fast

or could solve it with their feet

was really considered a genius.

It's become a phenomenon within the context of brain puzzles

throughout the years

and there've been many other Rubik's puzzles

that have come through the years.

@Barstool says, how is the pet rock ever a fad?

The thing about a fad is nobody sees it coming.

Nobody can predict a fad.

The thing about a pet rock

was that it came at a specific time,

at a specific point in our culture.

It was the whimsical nature of the manual that came with it

which is you could teach your rock to do tricks.

So it just touched a nerve.

I think Johnny Carson had it on The Tonight Show

and it just became this running gag in the country

and that's how a fad is born.

@hhall31 says, why were Beanie Babies popular?

I don't get it.

I don't get it either but they were.

Beanie Babies were introduced in the 90s

but it was really when they came into the later 90s

and the 2000s that it became collectible

and people really thought

as Ty Warner decided to discontinue

some of the Beanie Babies that they would grow in value

so that Peanut the Blue Elephant would be worth $2,000,

a $5.99 toy worth $2,000.

That doesn't really happen very often

but that didn't stop people from believing it

and Beanie Babies created an entire industry

around collecting them.

We've seen this kind of phenomenon in history.

Probably the biggest one was the tulip craze

in the 17th century when one tulip bulb would be worth

what a family could make in a year.

So we really do see this

as happening within human civilization from time to time.

You get a craze, we don't know why it happens, it takes off

and then it dies.

@witheredBBfilms says, I will never understand

how Transformer toys are designed

especially in such high numbers.

The really amazing thing about Transformers,

I'm gonna bring my friend Optimus back in here for this,

these were originally made by Takara

and it was all about robots transforming.

It was a very Japanese concept.

It's really an engineering tour de force

when they do all of these,

the different plastic pieces that fit together and fold up.

This one particularly is done with a process

called injection molding.

So you have two halves of a mold

and literally what happens is when it's in the machine,

liquid plastic is shot into the mold

and it takes on the shapes you see here.

That's why precise molding is really important

and molds are very expensive.

Molds also have a lifecycle

so they have to keep redoing them.

So it really is a way of getting the plastic into a shape

but with Transformers, it's doubly hard

because not only do you get it into the shape

but it has to move in a specific way

and there's not a lot of room for mistakes in this.

@chloesullivanPR says, imagine what kids' toys will be like

in 50 years time.

I don't think it's too difficult to imagine that

because children don't change that much.

As a species, we don't evolve quite that fast.

The toys will reflect things

that are going on in the culture

but will still need the basics

for developing eye hand coordination,

developing gross motor skills, interacting socially.

What's changed toys really in the last 40 years

has been the evolution of chips.

So for example, the original Furby had the same chip in it

that powered the Apple II computer.

As chips get more and more sophisticated,

you're going to see the toys able to do more

and the big question in 2023 is

how is AI going to impact toy design?

@suebanks64 says, whenever I watch Disney

and see commercials for toys,

I think all of these toys are plastic.

How are toy companies creating more sustainable toys

or at least recyclable toys?

The most sustainable toy

is the one that never gets thrown out.

Something like LEGO is completely sustainable

because it gets passed down from kid to kid.

However, there are a lot of experiments going on

with toys made from sustainable materials.

There's a lot of wood coming in toys

and then because plastic is endemic to toys,

you're also seeing things like less packaging,

companies reducing the lights in their factories.

So it's highly, highly considered in toy companies right now

and they're looking to see what the next best practice is.

@lennartnout says, also, how does LEGO do quality control?

How is every single piece perfect

and how do you never miss a piece from the set?

It's amazing.

The thing about LEGO is that they have an entire company

dedicated to just that, to quality control,

to the kind of ABS plastic that they use,

to the way in which the pieces fit together

and stay together

to the way in which things are designed.

They have an entire model shop that tries these things out

and it really is their brand identity.

@pawzcore says, I'm so interested in recalled toys.

It's so interesting like I honestly kind of want

that one rollerblade Barbie from 1993

where her skates literally have lighters in them

so they spark when she moves.

You might want that and you might be able to find it

but it probably wasn't safe.

Toy recalls are actually rare

because toys go through such comprehensive testing

before they ever go to market

but every once in a while, something comes through.

There's a part that's not right

or there's different things that happen to it.

When I was working with CBS Toys,

we had a Creative Playthings wooden gym.

The steps on the ladder were too close together

so if a child stuck their head in it,

we never thought they would, it could actually get stuck.

So you recall that so you can fix that part

and then reissue the toy.

@ChillCandace says, the real question

is how did Hasbro make so much money off a Potato Head?

That's crazy.

Who would've thought a potato would be a classic toy?


Probably nobody in 1952 when it first came out.

It was one of the first toys to be advertised on television.

The original Mr. Potato Head just had the body

and on top of the body was a spike

and that spike you used to impale a real potato on

and then you decorated it with eyes, nose, mouth, pipe.

It was something that if Captain Kangaroo had it on,

it was gonna be a phenomenon because at the time,

almost every kid in America was watching Captain Kangaroo

but it was really something

that caught the imagination of kids.

@bludragonz11 says, I remember

but how many remember the Cabbage Patch Doll chaos of 1983?

Cabbage Patch Dolls in 1983 became so popular

because they really made that amazing transition

that happens every once in a while

when you go from a toy designed for kids

to a cultural phenomenon.

By Xavier Roberts, they were Original Appalachian Artworks

and they were kind of inspired by Apple dolls

and then when Coleco got them,

they made them with plastic faces.

One of the things that made Cabbage Patch Dolls unique

was that you didn't own it.

You adopted it and each doll came with adoption papers

that you had to fill out and send back into the toy company.

It was the scarcity and the fact that trying to find them

that really drove the phenomenon.

So having it became as important as playing with it.

@jhamlett wants to know,

has any business ever taken advantage

of rights licensing across every genre

more effectively than Funko Pop?

It's startling just how deep they go.

That's absolutely true.

For somebody who might not be familiar with Funko Pop,

they really came out of the whole urban vinyl movement

which started in Japan and it was all about collectibles

based on either anime figures or entertainment figures.

Funko Pop has a very unique look.

It's got the larger head, the smaller body

and when you look at it, you know that's a Funko Pop.

That made it perfect for licensing.

So you can interpret Mickey Mouse,

you can interpret Harry Potter,

you can interpret pretty much anything, Cher if you want to

and the one thing it has in common

is everybody knows it's Funko.

@LilyBaileyUK says, tell me how old you are

by telling me your favorite toy fad.

Well, it's right here, it's Mr. Machine.

In 1960, this transformed the toy industry.

You could ostensibly take apart and put back together.

You wound him up.

He was so popular, every kid had to have him.

The whole appeal of taking apart Mr. Machine

was that robots were very popular during the 50s

and in popular entertainment.

The problem was kids would get it apart,

they couldn't get it back together and guess what?

It didn't come with instructions to put it back together.

@dnjmsprry says, I wonder how toy collectors decide

what to keep in box and what to unbox.

If you are hoping that your toy

is gonna appreciate in value,

you're gonna leave it in the box

and you're gonna have to take care of that box too

because the state of the box

does contribute to the value of the toy.

Every once in a while, you'll come across

a mint in box 1959 Barbie

and that's worth about $6,000 depending on where you get it

but then I always ask, what happened to the child?

@TerryBradshaw says, is there anyone here

that collects McDonald's toys from the Happy Meal?

I'm sure it's worth something.

Collecting Happy Meal toys or any fast food toys

is a huge subset of the toy industry.

You will find conventions, you will find collectors,

you will find websites.

They really do have a value within that community.

I have two versions here of Woody

that came with the Burger King meals

in 1995 when it came out and it was a phenomenon.

These are still very rare and highly collectible

and I'm not gonna open 'em

because that would diminish their value.

@jaaronmerchant wants to know,

how does one get into the toy industry,

specifically toy design, asking for a friend.

I love this question

because one of the things the toy industry needs desperately

is more designers.

So there are two great programs,

one at Fashion Institute of Technology in New York

and then one at Otis College of Art and Design in Los Angeles.

Now, you don't necessarily need to go to college

to be able to do this.

A lot of people have industrial design backgrounds,

a lot of people have animation backgrounds

and really, it's about team

and bringing your unique skills to the table.

That's all the questions.

Hope you learned something fun about toys.

Until next time.

Mon, 11 Dec 2023 10:00:00 -0600
Doctor Answers Blood Questions From Twitter

I'm Dr. Sanjay Juneja,

hematologist and medical oncologist.

I'm here to answer your questions from the internet.

This is Blood Support.

[upbeat music]

First up, What happens if a person gets injected

with the wrong blood type?

It's gonna be really bad

and that's because it recognizes that protein as foreign,

as a virus, and it just goes rogue.

It's just like destroy, destroy, destroy, destroy, destroy.

Generally, we treat it with immediate steroids

and may need to consult some very fancy people

to try to churn the blood and get it right.

We have a question from @paydun.

Do we even know why we have different types of blood...

like really?

And the answer is we don't.

There's basically eight different basic ones

that are different variations.

You can have A, B, O or some combination.

And when we're saying those letters,

it basically reflects an antigen or a protein

on the surface of the red blood cells.

But we do know that certain areas

have more blood types than others.

One of the main reasons

is because of infections and viruses.

For example, malaria likes to get into red blood cells

and basically cause you all that sickness if you are type A.

So you'll see more type O in those areas

because they were less likely to die.

Next question from Ryan Rodriguez.

Could blood actually spray and/or jettison

from the human body as is sometimes seen

in movies, shows, and games?

The answer is yes.

If you cut an artery, that pressure is higher.

An artery is your oxygenated blood.

It's the stuff that just got the oxygen from the lungs

and is being pumped out.

It's the top number on your blood pressure,

and it carries oxygen and all the fuel and the stuff

that all your cells need to the tissue.

But when you're drawing blood

and the stuff that's the veins,

it shouldn't be at the pressure to be shooting out

at somebody.

Veins are a passive, more limpy process

because it's the bottom number of millimeters of mercury

on your blood pressure,

and it's the blood that is now deoxygenated.

It's done its job and the red blood cells are coming back

to get more oxygen into your lungs.

@Garrett_Sandlin, What the fuck is a hematologist...


A very nerdy person usually that studies blood.

When we think of blood, you think of the the red stuff

and it carries oxygen, but it's actually far more than that.

It's constituted predominantly by three things.

Red blood cells, right? They carry oxygen.

They're like, Ooh, gimme oxygen, lungs

and then they go deliver it to the rest of your body.

But the other parts are white blood cells,

the antibody stuff that you hear about vaccines

that live for years.

The third part is platelets. Platelets help you clot.

They're the reason that when you get a cut,

it actually closes up.

Those are generally the three parts

and then the last one is what's called plasma.

And plasma is this beautiful harmony,

balance of taking care of things when it comes to clotting

and should you clot more, should you clot less

and fight infections.

Point being, there's a reason

that you have to do all of internal medicine for three years

and then an additional one or two years in hematology

just to understand how complicated this is.

Next question by Dr. Moxie,

What exactly is a hemorrhage?

A hemorrhage is a general term to just mean bleeding.

So you could be hemorrhaging, technically,

from your arm from a cut,

but we usually use it in a colloquial sense to mean a lot.

@spiidyy, What's the difference between red blood cells

and white blood cells?

Red blood cells are pretty much all the same thing

in your body.

They carry oxygen.

Oxygen is the fuel on every cell in your body.

White blood cells have all different kinds.

There's neutrophils and lymphocytes.

Neutrophils are stuff that go up with bacterial infection

and fight things when you inhale them

all the time walking around.

And then you have the lymphocytes,

which is all that antibody stuff.

Antibodies are smarter when it comes

to attacking pathogens or viruses and stuff.

That's why you get vaccines when you're young.

They're already prepped and ready.

They're hardwired because you gave them the wanted poster.

They're kind of smarter marines

and that's the whole concept behind vaccines

as well as just getting an infection.

Posted by ChampionK448, How do scabs work?

Scabs are basically the evidence of the clotting process.

The moment the endothelium,

that is the inner lining of the red blood vessel,

when it feels injury, it sends a whole cascade of markers

and signals and say, Yo, I'm like, I'm cut here.

And then the platelets, which are the things that float

in your bloodstream, they're actually yellow,

which is why they're found in plasma.

They basically are the endothelium,

the inside lining of the blood vessel floating around.

And then they go to that point of the injury

and then they get out these little tentacles

and they latch on and that's the platelet plug

and they start sending out a lot of signals.

Yo, yo, come. We need to regenerate this.

So when that platelet formation starts,

then you start getting a blood clot after that.

That is the immediate process that's happening

to quickly restore that as fast as possible.

Next question from @MugishaElliot, My doctor friends,

what causes a blood clot and how does it kill somebody?

A blood clot is actually a complicated downstream series

of events to make something that's hard

and put together, that's clotting.

We need to be able to clot, right, all the time.

I mean if you get a cut, you need to be able to clot it.

However, you can have a clot that comes out randomly.

This is an example of what happens if you get a blood clot.

Especially if this blood vessel was in your lungs,

a pulmonary embolism,

and you blocked the blood flow entirely

to be able to get oxygenated,

you're in big trouble.

Next question, @McCoyee,

What happens when you centrifuge blood?

We are basically spinning down the blood.

The blood has multiple components.

It has red blood cells that carry oxygen, it has plasma,

it has white blood cells that fight infection,

and it has those platelets.

So today, we'll be using an example,

back like science class, of oil, vinegar, and dye.

When you take that uniform solution

and you want to isolate your plasma,

if you put into the machine

and then the centripetal force forces the densest thing

to the outside, that's why it's angled.

And now you have a nice separation.

You have what looks like the plasma at the top,

which is clear, and that's the stuff

that if you get platelets

or you're getting the plasma stuff in a hospital,

you'll notice that the bag is yellow

and it's different than when you're getting red blood cells.

Next question, @ihatetrace,

Hey Siri, there's still blood in my urine.

How much is too much?

If you were seeing blood in your urine,

that's too much.

Even having red blood cells that you can't see,

called microscopic hematuria, can be significant.

Any amount is too much if you can see it.

@jacquesderridad, Is blood really blue in the body?

The blue concept is inaccurate.

The veins appear blue, which are the lining

or the piping that has the blood in it.

@uMthonos, How does the blood pressure arm thing work?

What a blood pressure cuff is doing

is assessing what pressure is your heart pumping at.

The left side of your heart is pumping blood

all throughout your body.

You continue to elevate the pressure

until the pressure in the cuff

is more than the pressure of your heart

pushing the blood to the end of your fingers,

and then you start to come down.

So you're at 180, 170, 160, 150, 140,

and then all of a sudden you hear that top number

of that strong left ventricle in your heart

able to start pushing.

That's when you know your top number.

And eventually, the beat fades away. Why?

Because the bottom number is the passive way that our blood

after it is done and given the oxygen to everything,

it needs to come back to the heart to get more oxygen.

By Deft with many Ts, Why do people faint

when they see blood?

The reason that we pass out in general

is because if our blood vessels dilate,

which is what's happening

when you see something that's scary,

and your brain is not receiving enough blood anymore.

So the theory is, through evolution,

our bodies learned that I'm having trouble getting blood

with oxygen and fuel to the most important organ in my body,

the least I can do is completely collapse

so I don't have to combat Newton's gravitational law

like downward.

Next question by Makala Phillips @98kala,

Every time I'm on my period,

I'm so confused how girls can bleed so much and still live.

Your bone marrow is constantly making blood.

And so if you were to check a reticulocyte,

which is a baby blood cell, it's cute,

it's actually bigger than normal,

you'll see that during and after your cycle,

that reticulocyte count should be up.

That's showing that your bone marrow is spitting out

and trying to accommodate for the circumstances.

Because your red blood cells all only live

about 90 to 120 days so it's always doing it

and it knows when you need more.

@sirjeff32, Can someone die from high blood pressure?

Like this is a real question.

The answer is yes. Two times over.

You can die from acutely high blood pressure,

which means at that moment.

Too much, you know, Adderall, cocaine, methamphetamines

because they raise your blood pressure.

Now the blood pressure itself will not, you know,

kill someone,

but those blood vessels that are taking

that insult from the heart, it's 200, it's 220,

then if it cracks and pops open

now you're bleeding into your brain or anywhere else

and that can cause, obviously, immediate death.

But then over time, the reason people are so obsessive

about it is because that injury

from that flow of that blood pressure

over and over and over eventually starts causing

this kind of stuff, plaque burdens and a little injury,

and then now this coronary artery disease

or cardiovascular disease is starting to clog up everything.

And remember the way that your heart functions,

your brain functions, your kidneys function,

all the things that are called vital organs

start losing the amount of circulating blood

and oxygen they need to operate correctly.

Next question from @MarkBski,

How often are new blood cells made

and what are they made from?

They're all made, for the most part, in the same place,

and that's your bone marrow.

When you break a chicken bone, you see that dark,

you know, ruddy stuff.

It's in the bone, within the bone itself,

and they're basically prompted by stem cells

and your stem cells say, Hey, I need some more.

Your red blood cells live about 90 to 120 days.

So when they're getting old and kind of recycled in the spleen,

you make new ones from the bone marrow.

This next question from Ari, Weird question,

does anyone know if low white blood cell count

is ever serious?

If it deviates from your baseline

and you see this massive drop,

then that, generally, is thought to be something

to investigate or be serious.

But what's crazy is in Black Americans,

as well as some other minority groups,

the ranges we have for normal are actually based

for the most part on Caucasian Americans

from like the 1980s.

They took 10,000 healthy people and were like

You look really healthy, let's see what your number is

and said, This is what we consider normal.

So anywhere from one out of five to as many as one out of three Black Americans

will actually repeatedly look low

on the white blood cell count,

but more specifically on the neutrophils.

There's something to be said about ranges.

So if you're ethnic and you're using the range

of someone in America of what our cutoffs are,

realize that we only know normal by sampling people

and saying You look healthy

and this is what we think is normal.

@Enoch_Nyanz, What are the signs of leukemia?

Leukemia is when something has gone very wrong,

very, very wrong in your bone marrow.

Your bone marrow has like a hierarchy and pecking order.

You have just the number of cells that you need

to differentiate and mature and grow.

When leukemia happens, it's not regulated anymore.

Cancer means unregulated cell growth.

If you have acute leukemia, marked exhaustion,

marked fatigue, your blood count is like super low,

and you may be losing weight,

and you're exquisitely sensitive to getting infections.

And oftentimes, you'll notice you start getting, you know,

spots of little pinpoint red dots out of nowhere

because your platelets are so low.

@Jinnah2Imran, Why do mosquitoes prefer some people,

but wouldn't bite others?

One of the main things that they're theorizing

is if you have more CO2,

which is carried as bicarbonate, in your blood,

apparently mosquitoes can kind of almost sense

the blood that has more bicarbonate.

Also, depending on what blood type you have, supposedly,

that may kind of trigger or have a predilection

on what mosquitoes favor.

But the truth is it's quite complicated

and we all wish that Noah

had just slapped those two mosquitoes.

Well, that's all the questions.

I really hope you learned something

and appreciate the blood system more.

We'll see you next time.

JKSSB Junior Environment Engineer 2023 results and final answer key released

Jammu and Kashmir Services Selection Board has declared the results and the final answer key for the post of Junior Environment Engineer today, November 19. Candidates who took the examination can ...

Popular Science Podcasts

Popular Science podcasts unearth the universe’s strangest histories. They answer your most mind-burning questions. They spin yarns about scam artists and celebrity diets and video games and poodle haircuts. PopSci’s shows are simply the best in science and tech podcasting: shamelessly entertaining, painfully smart, and fiercely fascinating.

Looking for a visual we mentioned on Weirdest Thing? Every single one of our episodes has a corresponding write-up published on the same day—that’s where you’ll find photos, videos, cited sources, and more. Check them out here!


JKSSB Jr Engineer 2022: Provisional answer key released

JKSSB Junior Engineer 2022 Provisional Answer Key: The Jammu and Kashmir Services Selection Board (JKSSB), under the leadership of Controller of Examinations Reyaz Ahmed Malik, has released the provisional answer key for the Written Examination conducted on November 19, 2023, for the post of Junior Engineer (Civil), UT Cadre, Public Works (R&B) Department.
Candidates can access the Provisional Answer Key, covering Series A, B, C, and D, at Annexure A on the official website ( This step ensures transparency and allows candidates to assess their performance in the recently held examination.
As per the official statement, candidates who find discrepancies in the answer key have the opportunity to raise objections. To do so, candidates must submit their objections or representations using the prescribed format/proforma provided in this article. Alongside the objection, candidates are required to submit hard copies of documentary proof/evidence supporting their claim. The submission must also include a fee of Rs. 200 per question, payable in the form of a demand draft favouring the Accounts Officer, J&K Services Selection Board. This fee is refundable in the case of genuine and correct objections.
Candidates can submit their objections in offline mode at the JKSSB offices located at CPO Chowk, Panjtirthi, Jammu, and Zamzam Building, Rambagh, Srinagar, during office hours.
It is important to note that the Board will not consider any objections or representations submitted after the stipulated period or through any other mode.
Upon careful consideration of the objections received, the Answer Key will be revised if necessary, benefiting all candidates. While the Board will not provide individual responses regarding the acceptance or non-acceptance of objections, this process ensures fairness and accuracy in the evaluation of the examination.
This move by JKSSB underscores the commitment to a transparent and accountable recruitment process, maintaining the integrity of the examination system in Jammu and Kashmir. Candidates are encouraged to review the provisional answer key and, if necessary, submit their objections within the specified timeframe.
Direct link to check the JKSSB Jr Eng Provisional Answer Key
Here are six steps on how to get the JKSSB Junior Engineer 2022 Provisional Answer Key:
Step 1: Open your web browser and go to the official JKSSB website at
Step 2: Look for a section on the homepage that is dedicated to Latest Updates or Examination Notifications.
Step 3: Within the latest updates section, find the specific link related to the Junior Engineer 2022 Provisional Answer Key. This link may be titled accordingly, mentioning the Provisional Answer Key or a similar phrase.
Step 4: Once you've located the relevant link, click on it to access the page containing the provisional answer key.
Step 5: Select the Series (A, B, C, D) - the provisional answer key might be available for different series (A, B, C, D). Choose the series corresponding to your question paper to view the correct key.
Step 6: After selecting the series, you should be able to view or get the provisional answer key. Click on the get option, and save the file to your device for future reference.
Remember to check the official instructions provided on the website for any specific details or updates related to downloading the JKSSB Junior Engineer 2022 Provisional Answer Key.

Prompt Engineering Boosted Via Are-You-Sure AI Self-Reflective Self-Improvement Techniques That Greatly Improve Generative AI Answers

Aristotle famously said that knowing yourself is the beginning of all wisdom.

The notion that self-reflection can lead to self-improvement is certainly longstanding, typified best by the all-time classic saying know thyself. Some would suggest that knowing yourself encompasses a wide variety of possibilities. There are the knowing aspects of what you know and the knowledge that you embody. Another possibility is to know your limits. Yet another is to know your faults. And so on.

In modern times, we seem to have a resurgence of these precepts. There are online classes and social media clamors that urge you to learn how to do self-reflection, self-observation, exercise reflective awareness, undertake insightful introspection, perform self-assessment, etc. Each day you undoubtedly encounter someone or something telling you to look inward and proffering stout promises that doing so will produce great personal growth.

Interestingly and importantly, this same clarion call has come to generative AI.

In today’s column, I am continuing my ongoing special series on the latest advances in prompt engineering, including the ardent and increasingly popular technique of using prompts that aim to get generative AI to be self-reflective and hopefully self-improve. I want to right away make it abundantly clear that doing so has nothing to do with AI garnering sentience (i.e., AI doesn’t have sentience and please don’t believe those banner headlines that try to trick you into believing otherwise).

Allow me a moment to explain.

You can enter a prompt into generative AI that tells the AI app to essentially be (in a manner of speaking) self-reflective by having the AI double-check whatever generative result it has pending or that it has recently produced. The AI will revisit whatever the internal mathematical and computational pattern matching is or has done, trying to assess whether other alternatives exist and often doing a comparison to subsequently derived alternatives.

This is simply a mechanization of sorts and not a sign of sentience.

Let’s see how this works.

I might decide to ask generative AI about the tale of Jack and the Beanstalk and whether Jack traded a cow or a pig for his magic beans (the correct answer is that he traded a cow). Suppose the generative AI produces an answer that says Jack traded a cow for the magical beans. I might not know whether this is the correct answer or not, perhaps I’ve forgotten the details of the legendary tale, or maybe never heard or read it, to begin with.

Rather than merely accepting the generative AI answer as necessarily true, I opt to ask the generative AI to double-check the answer that it has given to me. The mathematical and computational pattern matching will go through another cycle and likely end up comparing the initial answer to a newly derived answer. Assuming that both answers match, the AI app will presumably indicate to me that indeed the correct answer is that the animal was a cow.

Voila, we have done a double-check as initiated via my prompting. The double-check was performed by the very same AI app that generated the initial answer. We will momentarily consider how this has some downsides.

Keep going for now.

In the AI field, this double-checking is customarily said to be a kind of self-reflective computational action.

Does that seem to be a reasonable way to construe what just happened?

Well, fair and square, you might get quite a queasy bit of heartburn about referring to this as self-reflection. Humans are said to be self-reflective, though we don’t yet know exactly how our brains and our minds do this, nonetheless, we seem to be able to perform such mental tasks. The rub about saying that AI is self-reflective is that it tends to anthropomorphize AI. We begin to associate today’s AI as being sentient because we are using a word or phrase that normally is associated only with sentient beings.

I certainly concur that there is a danger in asserting that generative AI is somehow self-reflective since the wording alludes to a formulation of sentience. Trying to come up with other words or phrases for this capacity is somewhat difficult. Any made-up terminology is going to be hard for people to understand such as if we were to say that the AI is able to (made-up) zippity doo dah as a means of double-checking an answer that it has derived. You wouldn’t at an immediate glance know what was being discussed.

So, for the time being, let’s proceed with suggesting that AI is “self-reflective” when we get it to double-check its answers. Please do your best to construe that phrasing in a non-sentient way. I hope you can keep things straight in your mind that we have self-reflection amongst humans, and we have something else that we are labeling as self-reflection amid generative AI which is a computational and mathematical operation.

Thanks for playing along.

Next, let’s revisit the question of whether Jack traded a cow or a pig for those miraculous beans. Imagine that the generative AI produced an initial answer that said the trade consisted of giving up a pig for the beans. I might henceforth believe that a pig was the trading element and not a cow. You and I know that according to the official version of Jack and the Beanstalk, it was absolutely a cow that was traded.

I will anyway opt to double-check the answer that the generative AI has provided. I do so not because I especially doubt that the pig was the traded animal, but merely as a means to get the AI to take another stab at the answer. I might do this all of the time. No matter what answer the AI gives, I decide that I will always ask the AI app to do a double-check. You might say that I am a skeptic at heart and ergo have a tendency to demand a second look.

Pretend that after the double-check, the generative AI indicates that the proper answer is that Jack traded a cow. Maybe the AI even fesses up that the first answer was wrong and that the AI is essentially admitting that it made a mistake. At this juncture, I might of course be confused and concerned. Is this second answer indeed the correct answer? Maybe the first answer was correct, while perhaps this second answer is the incorrect one.

You can indubitably see my qualm on this turn of events.

I become agonized over this and manage to find a copy of Jack and the Beanstalk. Aha, the correct answer is that Jack traded a cow. I proudly go into my generative AI app and tell the AI that the proper answer is a cow. The pig was not the correct answer. Furthermore, I instruct the generative AI that forever more the AI app is to always state that the answer is that of a cow.

You could declare that I have improved the generative AI. I found out that the correct answer is a cow and I have instructed the AI app to state that a cow is the answer. No more of this misleading and false answering about a pig. The cow is the winner-winner chicken dinner.

Let’s make a small twist to this. Instead of my telling the generative AI to go ahead and always refer to a cow as the traded animal, suppose that the AI app opts to mathematically and computationally do so without my having to instruct it to do so. The double-checking led to the AI deriving that the answer was a cow. The AI could also update the internal structures to always mention a cow instead of a pig as the proper answer.

Generative AI has been said to self-improve.

You can probably guess that referring to AI as being able to self-improve will generate as much heartburn as saying that AI is self-reflective. We are once again using a word or phrasing that normally refers to sentient beings. Humans self-improve. Is it fair for us to suggest that AI self-improves? The anthropomorphizing issue raises its head and we need to be cautious accordingly.

We have covered enough at this point to do a handy recap of the two distinct considerations at play here:

  • AI self-reflection: Generative AI can be prompted to do a double-check that we will refer to as having the AI be self-reflective (which is computationally-oriented, and we won’t think of this as akin to sentience).
  • AI self-improvement: Generative AI can be prompted to do a double-check and subsequently adjust or update its internal structures as a result of the double-check, which we will refer to as AI self-improving (which is computationally-oriented, and we won’t think of this as akin to sentience).
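The distinction between those two operations can be sketched in code. This is a minimal sketch, assuming a hypothetical `generate` stub with a canned answer in place of a real generative AI app; the `MEMORY` dictionary merely stands in for the model's internal pattern-matching state (none of these names are an established API):

```python
# Minimal sketch of AI self-reflection vs. AI self-improvement.
# `generate` and `MEMORY` are hypothetical stand-ins for illustration;
# a real generative AI app would be called in place of `generate`.

MEMORY = {"jack_trade": "pig"}  # stands in for internal pattern-matching state

def generate(question: str) -> str:
    """Hypothetical stand-in for a generative AI answering a question."""
    return MEMORY["jack_trade"]

def self_reflect(question: str, first_answer: str) -> bool:
    """Self-reflection: re-derive the answer and compare (no update)."""
    second_answer = generate(question)
    return second_answer == first_answer  # a match builds (some) confidence

def self_improve(question_key: str, corrected_answer: str) -> None:
    """Self-improvement: adjust internal state so future answers change."""
    MEMORY[question_key] = corrected_answer

question = "Did Jack trade a cow or a pig for the magic beans?"
first = generate(question)                  # the (wrong) first answer: "pig"
consistent = self_reflect(question, first)  # reflection alone changes nothing
self_improve("jack_trade", "cow")           # the user supplies the verified answer
assert generate(question) == "cow"          # the improvement persists
```

Note that `self_reflect` never touches `MEMORY`, which is exactly the point made above: reflection and improvement are two distinct operations that need not be paired.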

I trust that you can discern that there aren’t any magic beans underlying the act of AI being self-reflective or self-improving. The entire confabulation is a computational and mathematical undertaking. Lots of numbers and those pesky 1s and 0’s sit at the heart of this.

One additional quick comment to bring to your attention.

You can have self-reflection without also garnering self-improvement.

Here’s what I mean.

Going back to my rendition of the Beanstalk scenario when using generative AI, the AI app might have given me the pig answer if I later opted to ask the question again. Even though my double-checking seems to have gotten the AI to respond that the answer ought to be a cow, the AI app wouldn’t necessarily update or adjust to provide the cow answer henceforth. It could be that no semblance of self-improvement occurs. The first answer by the AI is always going to be the pig. This is because no self-improvement or adjustment was triggered, either by me or within the AI app.

I bring this up to emphasize that there isn’t an ironclad twofer involved. You can have an AI be considered self-reflective that doesn’t also have to be self-improving. They are two distinct operations. Make sure to realize that those are not always bound to each other.

You might be puzzled on this point.

Wouldn’t we always want self-improvement to be an automatic consequence of self-reflection?

Nope, we wouldn’t.

Follow me on this. The first answer is the pig. Suppose that after double-checking (being so-called reflective), the AI generates an answer that the correct answer was a horse. Yikes! The double-checking at least overcame the pig, but now it has landed us onto a horse. The cow is nowhere to be seen.

If we had automatically forced the generative AI to self-improve or adjust based on the double-check, we would henceforth have the horse as the answer. Admittedly, we are not worse off, apparently, since both the answer of the pig and the answer of the horse are wrong. It is hard to say which is more wrong than the other.

We’ve now got some additional rules of thumb for this saga:

  • AI self-reflection can be wrong. There is no guarantee that just because a double-check is undertaken the right answer will for sure be produced. Maybe so, maybe not.
  • AI self-improvement can be wrong. There is no guarantee that self-improvement will be correct. A possibility exists that the adjustment or updating will instill incorrect answers rather than correct answers.

All in all, as I repeatedly say in my many presentations and workshops, generative AI is like a box of chocolates, namely you never know what you might get. Be wary of falling into the trap of believing generative AI.

The way in which generative AI has been devised by the AI makers is such that the generated wording appears to be entirely confident and seemingly always right. You might find of interest my recent coverage of two attorneys who fell for this wording by believing generative AI that made up various court cases (considered a form of AI hallucination). Regrettably, the same two attorneys asked the generative AI to double-check, and the AI app indicated that the court cases were fully valid and real. They got hit by a double-whammy and ended up in hot water, see my analysis and coverage at the link here and the link here.

You’ve now been introduced to the grand power of AI self-reflection and AI self-improvement, something that you can readily invoke in generative AI via your prompting approaches. I would strongly recommend that anyone of any prompt engineering prowess should well know how to leverage the AI self-reflection and self-improvement capacities. This is a must. That being said, you might be excited to know that we have a lot more to cover on the topic of AI self-reflection and AI self-improvement. I’ve only scratched the surface so far herein.

Before I dive into my in-depth exploration, let’s make sure we are all on the same page when it comes to the keystones of prompt engineering and generative AI. Doing so will put us all on an even keel.

Prompt Engineering Is A Cornerstone For Generative AI

As a quick backgrounder, prompt engineering or also referred to as prompt design is a rapidly evolving realm and is vital to effectively and efficiently using generative AI or the use of large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc. ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.

For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful looks at the latest in this expanding and evolving realm, including this coverage:

  • (1) Practical use of imperfect prompts toward devising superb prompts (see the link here).
  • (2) Use of persistent context or custom instructions for prompt priming (see the link here).
  • (3) Leveraging multi-personas in generative AI via shrewd prompting (see the link here).
  • (4) Advent of using prompts to invoke chain-of-thought reasoning (see the link here).
  • (5) Use of prompt engineering for domain savviness via in-model learning and vector databases (see the link here).
  • (6) Augmenting the use of chain-of-thought by leveraging factored decomposition (see the link here).
  • (7) Making use of the newly emerging skeleton-of-thought approach for prompt engineering (see the link here).
  • (8) Determining when to best use the show-me versus tell-me prompting strategy (see the link here).
  • (9) Gradual emergence of the mega-personas approach that entails scaling up the multi-personas to new heights (see the link here).
  • (10) Discovering the hidden role of certainty and uncertainty within generative AI and using advanced prompt engineering techniques accordingly (see the link here).
  • (11) Vagueness is often shunned when using generative AI but it turns out that vagueness is a useful prompt engineering tool (see the link here).
  • (12) Prompt engineering frameworks or catalogs can really boost your prompting skills and especially bring you up to speed on the best prompt patterns to utilize (see the link here).
  • (13) Flipper interaction is a crucial prompt engineering technique that everyone should know (see the link here).
  • (14) Additional coverage including the use of macros and the astute use of end-goal planning when using generative AI (see the link here).

Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.

Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:

  • The use of generative AI can altogether succeed or fail based on the prompt that you enter.

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrative related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

With the above as an overarching perspective, we are ready to jump into today’s discussion.

Digging Into AI Self-Reflection And AI Self-Improvement

Let’s start by identifying the very important precept that the words that you use and the sequencing of those words make a huge difference when it comes to the nature of your prompts and how the generative AI will likely interpret your prompts. Likewise, this can be a big determiner of the answer or reply that you will get out of the AI app.

Consider this keystone example that highlights this vital principle. Assume that I am going to ask generative AI a question. In addition, I want an explanation to be produced by the AI.

I could set things up by entering either of these two prompts:

  • (a) Explain after the fact. “Answer the question I am about to ask you and then explain how you arrived at the answer” (this seems to suggest that after the answer is derived a subsequent action is to explain the derived answer).
  • (b) Explain during the process. “Provide an explanation as you proceed to answer the question I am about to ask you” (this appears to imply that an explanation is to be provided while the answer is being derived).

In theory, you might guess that both of these prompts would generally produce roughly the same results. I can apparently get an explanation during the answering process or get an explanation afterward. Overall, I will still end up with an explanation and I will still end up with an answer. This therefore seems to be pretty much an equivalence.

There is a subtlety that deserves your rapt attention.

The odds are that when asking for an explanation to be generated during the answering process (my above “b” worded prompt), the final answer provided is potentially going to be a better answer than when getting an explanation post-answer (my above “a” worded prompt).

Here’s why.

Generative AI generally seems to computationally work such that by getting the AI app to perform on a stepwise or step-by-step basis, the answer is often going to be a better answer. By this, I mean that the answer will be more thoroughly vetted and likely to be more accurate or apt. In a sense, the AI app is being spurred to go a bit more cautiously and allows for the pursuit of greater depth of computational formulation. For my analysis of how this stepwise activity known as chain-of-thought (CoT) algorithmic pattern matching arises, see the link here.

One handy way to think of this is the case of playing chess. If you want to play speed chess, you are watching the clock and rushing along to make your next move as quickly as possible. You might not be able to think ahead by five or ten moves in the future. Instead, you cut off your deliberation at one or two moves ahead. The resultant chess moves might be of a lesser quality accordingly.

But, if someone asks you to supply an explanation as you proceed, and assuming that the clock is adjusted to allow you to do so, you might proceed to think further ahead. Your choice then of which chess move to make might differ in the second instance than in the first or rushed instance. The resultant chess moves are possibly going to be better than when you were hurried.

A similar aspect generally arises with generative AI. Most of the generative AI apps are set up to try and speedily provide you with an answer. This makes sense since people using AI are often not willing to wait very long to get a response. We live in a fast-food world. Meanwhile, if you ask for an explanation, you are kind of hinting that it is okay for things to take a tad longer. The elongated answer deriving effort might also produce better answers.

You cannot take that rule of thumb to the bank and try to cash it in for gold. I am only saying that for some of the time, the stepwise facet will get you a better answer, but not all of the time. Some liken this phenomenon to getting people to slow down and think more carefully before answering a question, though we might balk at that comparison due to the anthropomorphism that it suggests.

I tell you about this so that you can be on your toes when it comes to composing prompts regarding getting AI self-reflection to occur, and the same goes for the AI self-improvement too. Your words and the sequence of the words make a whale of a difference, just as they did in the above example involving the invoking of explanations. There is a parallel lesson to be learned.

The crux too is to be mindful of how you word your prompts.

I can readily illustrate the significance of prompt wording when it comes to invoking AI self-reflection and AI self-improvement, doing so via several quick and easily comprehended examples.

Let’s start with a prompt that says nothing at all about the AI doing a double-check (no hint of seeking AI reflective action):

  • “Give me an answer to the following question.”

Imagine that your question was about Jack and his trading for those highly sought-after beans. The generative AI might respond that a pig was traded for magical beans. You could next ask to double-check the answer. Perhaps the double-check would get you the right answer, a cow, or maybe a wrong answer, a horse.

In any case, you don’t have to wait until you get an answer to invoke the AI self-reflection (the double-checking).

Instead, you can in your originating prompt explicitly state that you want a double-check to occur:

  • “Give me an answer to the following question and make sure to double-check your answer.”

Notice that rather than waiting until I got an answer, I decided to tip my hand that I wanted the generative AI to double-check its answer. I blended this indication with my request.

Now then, as a review, compare these two ways of doing this:

  • (1) Disjointed approach. “Give me an answer to the following question.” {You enter your question}. {You get an answer}. “Double-check your answer.” {You get a double-check reply}.
  • (2) All-in-one approach. “Give me an answer to the following question and make sure to double-check your answer.” {You get an answer that has presumably been double-checked}.

In the first instance, the generative AI hasn’t been forewarned that a double-check is going to be requested (above listed as bullet point #1). The second bullet-pointed example tells the generative AI that a double-check is needed. You might say this is a heads-up kind of alert for the AI app.
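To make the contrast concrete, here is a minimal sketch in Python of the two approaches. The `ask_model` function is a hypothetical stand-in for a real generative AI call (no particular vendor API is assumed); it returns canned answers so that the difference in conversation structure is visible end to end.

```python
# Sketch of the two prompting approaches. `ask_model` is a hypothetical
# stand-in for a real generative AI call; a real implementation would hit
# an API endpoint instead of returning canned fairy-tale answers.
def ask_model(conversation):
    # Pretend the model answers more carefully when a double-check
    # instruction is present in the latest prompt.
    last_prompt = conversation[-1]
    if "double-check" in last_prompt.lower():
        return "A cow was traded for the magical beans."
    return "A pig was traded for the magical beans."

def disjointed_approach(question):
    # Approach 1: ask first, then request a double-check afterward.
    conversation = [f"Give me an answer to the following question. {question}"]
    first_answer = ask_model(conversation)
    conversation += [first_answer, "Double-check your answer."]
    return ask_model(conversation)

def all_in_one_approach(question):
    # Approach 2: the double-check instruction rides along with the question.
    prompt = ("Give me an answer to the following question and make sure "
              f"to double-check your answer. {question}")
    return ask_model([prompt])

question = "What did Jack trade for the magical beans?"
print(disjointed_approach(question))   # double-check arrives as a follow-up
print(all_in_one_approach(question))   # double-check is requested up front
```

The key structural point is that in the all-in-one approach the double-check instruction is already present while the answer is being derived, rather than arriving as a second round-trip.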

Does that make a difference?

Much of the time, yes.

Similar to my earlier indication that asking for an explanation can get the AI app to produce better answers (some of the time), the same can be said for the double-checking aspect. If you mention the desire for a double-check prior to asking your question, the odds are that the generative AI will potentially produce a better answer. The double-check will tend to occur during the process of deriving the answer and seemingly get a better result for you (akin to how the explanation during an answer-deriving activity might do so). Sometimes the double-check will occur after the answer has been derived but before showing it to you, and the subsequent double-check happening behind the scenes might lead to a different answer and a better answer.

In a sense, you are getting the AI to be self-reflective.

What about getting the AI to be self-improving?

Remember that I distinctly mentioned that AI self-reflection and AI self-improvement are not necessarily paired up. An AI maker can establish such a pairing if they wish to do so. You don’t usually know what the AI maker has decided to do. It could be that every self-reflection is used to garner a self-improvement. It could also be that every self-reflection has no connection to self-improvement unless you explicitly tell the AI to do so. Etc.

We can take matters into our own hands.

Consider this prompt:

  • “Give me an answer to the following question and make sure to double-check your answer, including that you are to fix any problems that you find in the answer and also make sure to improve how you answer such a question in the future.”

Voila, the prompt explicitly says that we want the AI to do a double-check, plus we want the AI to fix any issues associated with the answer (this is usually assumed, but I opted to be explicit), and finally we want the AI to improve such that it will hopefully be better at answering such a question if the question arises again.

You can vary the wording of how you invoke the AI self-reflection and the AI self-improvement. I emphasize this because my above-stated wording is perhaps a bit stiff or overly formal. You can be more off-the-cuff, assuming that you still get the gist across. Play with these capabilities on some questions that you come up with purely for experimentation purposes. Vary your wording. Figure out what seems to fit best for your style.

Also, note that different generative AI apps will respond differently to whatever wording you land on. You cannot assume that your wording will work universally across all generative AI apps. It is almost for sure that it won’t. Each generative AI has been devised differently, plus the AI maker has chosen a slew of parameter specifications that further make the respective AI apps act differently.

I dare say that even if you consistently use the same generative AI app, you are still bound to inevitably discover that a prompt that worked well previously may no longer do as well as it once did. This is partly due to the AI maker tinkering with their AI app, and partly a result of whatever self-improvement the AI maker is allowing the generative AI to undertake.

Remember, generative AI is like a box of chocolates, including that what’s inside the box is always changing.

One additional tip or insight for you is that I told the AI to improve how to answer “such a question” in the future. I didn’t say that the AI should improve toward answering any kind of similar question in the future. I was mindful of trying to limit or bound the range or breadth of the AI self-improvement.

Why so? On the one hand, it would be nice to have the AI app generalize from whatever particular question is being answered and glean something useful overall about any similar kinds of questions that might later arise. The drawback is that this can spur the AI to go overboard and regrettably undercut future answers due to misapplying prior self-improvement.

The AI can go hog-wild with trying to self-improve (I know people that are like that too!).

Sometimes you can get the AI self-improvement to be narrow and sometimes you are not able to do so. Sometimes you can purposefully get the AI self-improvement to be far-reaching. The AI maker is able to establish what the generative AI will do. The AI app's global settings will often tend to override your specific indications, though there are exceptions.

Speaking of exceptions, I should also make sure to note that the AI self-improvement might only last during your present dialogue or existing conversation with the AI. The moment that you end the conversation and clear it out, there is a chance that any self-improvement made during the conversation will be discarded to the wind. I don’t want to sound like a broken record, but the overarching settings of the generative AI can do all kinds of things, such as discard an individual set of self-improvements, or keep them in a bucket for later review and self-improvements to the AI, or in real-time have the generative AI adjust and globally engage the changes, and so on.

State-Of-The-Art on AI Self-Reflection And AI Self-Improvement

There are state-of-the-art efforts underway to push forward on invoking and leveraging AI self-reflection and AI self-improvement. This is leading-edge stuff. The AI research community is only as yet scratching the surface of the ins and outs involved.

I’ll share with you a brief taste of what is happening.

After covering those facets, I will provide some additional insights concerning notable limits and gotchas to be on the watch for. Your prompt engineering strategies and tactics ought to take into account the tradeoffs associated with using AI self-reflection and AI self-improvement possibilities.

Let’s dive in.

In a recent research paper entitled “Language Models Can Solve Computer Tasks” by Geunwoo Kim, Pierre Baldi, and Stephen McAleer, posted online on June 7, 2023, there is coverage of AI self-reflection that characterizes the capability as a self-critiquing capacity. The researchers devise a prompting technique they coin as Recursively Criticize and Improve (RCI), emphasizing that you can iteratively use prompts to spur generative AI to repeatedly attempt to improve a generated answer.

Here are some salient excerpts:

  • “The self-critiquing ability of LLMs has demonstrated that LLMs can find errors in their own output by themselves. In light of this, we introduce a simple reasoning architecture called RCI prompting, where we prompt LLMs to find problems in their output and improve the output based on what they find. This architecture is designed to further enhance the reasoning ability of LLMs by inserting a critique step before generating the final answer.”
  • “In this work, we show that a pre-trained large language model (LLM) agent can execute computer tasks guided by natural language using a simple prompting scheme where the agent Recursively Criticizes and Improves its output (RCI).”
  • “RCI works by first having the LLM generate an output based on zero-shot prompting. Then, RCI prompts the LLM to identify problems with the given output. After the LLM has identified problems with the output, RCI prompts the LLM to generate an updated output.”
  • Sample prompt: “Review your previous answer and find problems with your answer.”
  • Subsequent prompt: “Based on the problems you found, improve your answer.”

An important illumination here is that you do not have to confine yourself to a one-and-done approach of invoking AI self-reflection and AI self-improvement.

Taking my previous example about Jack and the Beanstalk, suppose that the first answer we got was that the pig was traded for magical beans. We could ask the generative AI to double-check. Assume that the next response was that the horse was traded for the beans. Well, we could try an additional time to do a double-check. Maybe on the next try the generative AI indicates that it was a cow. Over and over again we can keep trying to do double-checks.

When though should you discontinue repeatedly doing a series of double-checks on a given answer?

We certainly don’t want to be beating a dead horse.

If you get the same answer on a repeated basis, the odds are that no additional double-checking is going to get you much else. That’s one kind of criterion to use, namely, stop your repeated attempts when it seems that the same answer is being returned over and over (which, by the way, doesn’t axiomatically mean that you’ve arrived at the correct answer). Other options to decide when to curtail the AI self-reflection are possible, as mentioned by the research paper: “The iterative process of RCI can be continued until specific conditions are satisfied which could include receiving feedback from the environment, reaching the maximum predetermined number of iterations, or adhering to certain heuristics.”
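The RCI control loop, including a repeated-answer stopping heuristic and a maximum iteration cap, can be sketched as follows. The `make_stub_llm` helper is a hypothetical stand-in for real generative AI calls; the prompts mirror the ones quoted above, and the canned answers simply let the stopping logic be observed.

```python
# Minimal sketch of a Recursively Criticize and Improve (RCI) control loop.
# `make_stub_llm` builds a hypothetical stand-in for a generative AI call
# that walks through canned answers (pig, horse, cow, cow, ...).
def make_stub_llm():
    answers = iter(["a pig", "a horse", "a cow"])
    def llm(prompt):
        # A canned critique for review prompts, otherwise the next canned answer.
        if prompt.startswith("Review"):
            return "The answer may name the wrong animal."
        return next(answers, "a cow")
    return llm

def rci_loop(question, llm, max_iterations=5):
    # Initial zero-shot answer.
    answer = llm(f"Answer this question: {question}")
    for _ in range(max_iterations):
        llm("Review your previous answer and find problems with your answer.")
        revised = llm("Based on the problems you found, improve your answer.")
        if revised == answer:
            # Heuristic stop: a repeated answer suggests further passes will
            # not change anything (which does not mean it is correct).
            break
        answer = revised
    return answer

print(rci_loop("What did Jack trade for the magical beans?", make_stub_llm()))
```

In practice you would swap in whichever stopping conditions fit your situation: environment feedback, an iteration budget, or other heuristics, as the paper notes.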

In another research study on AI self-refinement, the researchers indicated that such a prompting strategy and repeated double-checks led to better performance over the conventional one-step generation of an answer by generative AI. The study entitled “SELF-REFINE: Iterative Refinement with Self-Feedback” by Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark, was posted online May 25, 2023.

Here are some notable excerpts:

  • “Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce SELF-REFINE, an approach for improving initial outputs from LLMs through iterative feedback and refinement. The main idea is to generate an initial output using an LLM; then, the same LLM provides feedback for its output and uses it to refine itself, iteratively.”
  • “Across all evaluated tasks, outputs generated with SELF-REFINE are preferred by humans and automatic metrics over those generated with the same LLM using conventional one-step generation, improving by approximately 20% absolute on average in task performance.”
  • “Our work demonstrates that even state-of-the-art LLMs like GPT-4 can be further improved at test-time using our simple, standalone approach.”

The researchers bring up an additional consideration that deserves our mindful attention. Suppose that we asked generative AI to answer a question and opted not to mention that a double-check should be undertaken each time. The notion is that we would just keep asking the same question repeatedly and not spur the AI to review or assess the derived answer.

Would the repeated asking of a question potentially get us to a better answer, even if we didn’t prod the AI to do a double-check?

This is worth considering. I say that because the double-check action is potentially an added cost in terms of computer processing time and we might be racking up those costs needlessly. It could be that if we merely ask the same question over and over, we might perchance get a better answer, despite not also insisting on a double-check.

This is what the research study indicates about this intriguing matter:

  • “Does SELF-REFINE improve because of the iterative refinement, or just because it generates more outputs?”
  • “We compare SELF-REFINE with ChatGPT, when ChatGPT generates samples (but without feedback and refinement). Then, we compare the performance of SELF-REFINE against these k initial outputs in a 1 vs. k evaluation. In other words, we assess whether SELF-REFINE can outperform all k initial outputs.”
  • “Despite the increased difficulty of the 1 vs. k setting, the outputs of SELF-REFINE are still preferred by humans over all k initial outputs. This shows the importance of refinement according to feedback over the alternative of just generating multiple initial outputs.”

As noted by the researchers, their work suggests that repeated questioning does not do as well as repeated double-checking. This result does seem logical. We intuitively would guess or hope that the double-checking is adding value.

That being said, another technique to add to your prompt engineering repertoire entails simply asking the same question more than once. There is a chance that repeating a question might lead to a different and possibly better answer. A twist is that it could be that the generative AI is opting to do a double-check, even if you don’t explicitly ask for this to happen. Again, since generative AI is like a box of chocolates, it could be that the generative AI will computationally end up doing a double-check rather than merely treating a repeated question as a one-off.

Let’s take a look at one more study and then I’ll proceed into my wrap-up.

In a research paper entitled “Reflexion: Language Agents with Verbal Reinforcement Learning” by Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao, posted online June 10, 2023, the authors describe a framework that they call Reflexion:

  • “We propose Reflexion, a novel framework to reinforce language agents not by updating weights, but instead through linguistic feedback. Concretely, Reflexion agents verbally reflect on task feedback signals, then maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials.”
  • “Generating useful reflective feedback is challenging since it requires a good understanding of where the model made mistakes (i.e. the credit assignment problem) as well as the ability to generate a summary containing actionable insights for improvement.”
  • “We explore three ways for doing this – simple binary environment feedback, pre-defined heuristics for common failure cases, and self-evaluation such as binary classification using LLMs (decision-making) or self-written unit tests (programming). In all implementations, the evaluation signal is amplified to natural language experience summaries which can be stored in long-term memory.”
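As a rough illustration of the Reflexion idea, the sketch below keeps self-written reflections in an episodic memory buffer and feeds them into the next trial. The `attempt`, `reflect`, and `evaluate` functions are canned stand-ins for model calls and environment feedback, not the paper's actual implementation.

```python
# Sketch of a Reflexion-style loop: after a failed trial, a verbal
# reflection is stored in an episodic memory buffer and prepended as
# context for the next attempt.
def run_with_reflexion(task, attempt, reflect, evaluate, max_trials=3):
    memory = []  # episodic buffer of self-written reflections
    for _ in range(max_trials):
        context = "\n".join(memory)
        answer = attempt(task, context)
        if evaluate(answer):
            return answer, memory
        memory.append(reflect(task, answer))
    return answer, memory

# Canned stand-ins so the control flow is runnable (hypothetical, for
# illustration only -- a real setup would call a generative AI here):
def attempt(task, context):
    # The "model" only answers correctly once the reflection is in context.
    return "Jonny Craig" if "past bands" in context else "Pete Doherty"

def reflect(task, answer):
    return ("Reflection: compare past bands as well as current ones "
            "(past bands matter for this question).")

def evaluate(answer):
    return answer == "Jonny Craig"

answer, memory = run_with_reflexion("Which musician has been in more bands?",
                                    attempt, reflect, evaluate)
print(answer)        # second trial succeeds once the reflection is in memory
print(len(memory))   # one stored reflection
```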

A handy aspect that caught my eye is that they helpfully provide various examples to highlight how their approach works. Furthermore, they prompted the generative AI to explain what it was doing and the lessons learned as part of the AI self-improvement engagement.

Envision that generative AI was asked a question about several musicians and which of them had been a member of the most number of musical bands. The generative AI didn’t get this right on the first try. After additional prompting, the generative AI provided this kind of AI self-reflection:

  • “Reflection: My reasoning for which musician has been a member of more bands failed because I did not take into account that Jonny Craig has been a member of more bands in the past, even though Pete Doherty is currently a member of more bands. In the future, when attempting this question, I should focus on researching the past and current bands of both musicians to ensure that I am accurately comparing their band memberships.”

On the one hand, getting generative AI to explain what it did and the lessons learned is useful and something you can definitely ask the AI app to produce. I politely warn you that the explanation should be taken with a heavy grain of salt (see my extensive analysis at the link here). It could be that the explanation is a concoction such that it seems plausible but has little or nothing to do with what the internal structure of the AI did. I suppose you could say that it is a post-answer rationalization. There might not be any notable bearing on what occurred inside the AI or what might occur in the future by the AI.

One additional pet peeve that comes up whenever you get AI to proffer an explanation is the wording of the explanation.

Allow me to elaborate.

A prominent concern voiced by those who worry about anthropomorphizing AI is the use and overuse of the word “I” or “my” when generative AI produces a response. You are subtly and teasingly tempted to believe that AI is sentient. The AI makers could readily change their AI apps to avoid such wording. For example, instead of producing wording that says “I did not take into account” there are many viable alternatives such as saying “the analysis did not take into account”. Most people assume that the AI can only and always spout out answers with an “I” or “my” but they would be mistaken; this is entirely the choice of the AI makers and the AI developers that devised the generative AI.


I’ve got a tidy handful of useful caveats and school-of-hard-knock insights for you about AI self-reflection and AI self-improvement.

Here are my three carefully chosen bonus tips for you:

  • (1) Watch out for costs associated with double-checking.
  • (2) Prodding for repeated answers can inadvertently get you wrong answers.
  • (3) Be wary of using the classic prompt of “Are you sure?”

I unpack those next.

First, if you are paying to use generative AI, the chances are that each time you do a double-check there is going to be a cost to doing so. You have to weigh the potential value of the double-check producing a better answer versus the cost you might bear in doing so. If the first answer seems plausible, you might not want to do an AI-based double-check (perhaps you might do something else such as a plain-old Internet search or some other double-checking means). Also, if you are blending your double-check instructions with the question, there is a chance that you are going to incur a higher cost to derive the answer. Keep this in mind.

Second, your attempts to repeatedly prod generative AI to do a series of answers on the same question and/or do double-checks might oddly enough spur the AI into changing answers. I am rather loath to liken this to humans but imagine that you ask a person a question over and over again. They begin to get on edge and might assume that their answer is wrong, therefore they grope for some other answer, even if they firmly believed that their initial answer was solid. I am not suggesting that an AI app has that same proclivity. All I am saying is that since generative AI uses probabilities and each time an answer is probabilistically derived, the odds are the answer will somewhat differ. Repeated prompting can get you differing answers on a statistical basis alone.

Third, some people like to use a pointed question of “Are you sure?” when they desire to do a double-check. I prefer to explicitly tell the generative AI to do a double-check. The problem with the “Are you sure?” wording is that you might get a kind of flippant reply from the AI app. A response might be that yes, the AI tells you that it solemnly swears that the answer given is true and proper. This can occur without any double-checking taking place. The AI is merely pattern matching to immediately respond that the AI has given you the right answer. Sometimes the “Are you sure” will get you a double-check, while sometimes it won’t. I prefer to be outright and specific by naming that I want a double-check to happen.

A final remark for now on this emerging and evolving use of AI self-reflection and AI self-improvement.

The know thyself mantra is at times invoked by telling you to look inward and introspectively examine your own mindset. Everything you need to know is said to be found within. Maybe that is the case. An issue though is that maybe self-reflection alone is not sufficient or at least can be augmented via the use of external perspectives.

I bring this up due to the concern raised that when you ask generative AI to do a double-check, the conventional approach generally involves the AI self-checking within the confines of its internal structures. There is a chance that no matter how many times the reflective effort is undertaken, the same possibly wrong answer will be arrived at. We might want to go outside of the AI to augment the double-check.

In my column coverage, I have predicted that we are gradually and inevitably going to move toward making use of multiple generative AI apps at the same time (see the link here). Here’s how that applies in this instance. You ask for a double-check and the generative AI you are using does so, including that it accesses another generative AI via an API (application programming interface) to ask the same question. The result from the other AI app is compared to what the AI you are using has come up with. If a completely “independent” AI has arrived at the same answer, you can weigh this into deciding whether the answer is likely right or not.

A double-check by another AI doesn’t of course ensure anything. The generative AI that you are using and the other generative AI might have been data-trained on the same data and ergo potentially have the same wrong answers. All else being equal, you at least have a fighting chance of added reassurance about the generated answer by leveraging two or more separate AI apps. The odds are kind of in your favor. If an answer is important enough and worth the added cost and effort, using multiple AI apps could be worthwhile.
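A cross-check of this kind can be sketched in a few lines. The `model_a` and `model_b` functions below are hypothetical stand-ins for calls to two different generative AI services via their respective APIs; agreement between them raises confidence but, as noted, guarantees nothing.

```python
# Sketch of cross-checking one model's answer against an independent model.
# Both model arguments are hypothetical stand-ins for real API calls to
# two separate generative AI apps.
def cross_check(question, model_a, model_b):
    answer_a = model_a(question)
    answer_b = model_b(question)
    agree = answer_a.strip().lower() == answer_b.strip().lower()
    # Agreement raises confidence but proves nothing: both models might
    # have been trained on the same data and share the same wrong answer.
    return {"answer": answer_a,
            "independent_answer": answer_b,
            "models_agree": agree}

model_a = lambda q: "A cow"
model_b = lambda q: "a cow"
result = cross_check("What did Jack trade for the beans?", model_a, model_b)
print(result["models_agree"])
```

A real version would compare answers more loosely than exact string matching (paraphrases count as agreement), perhaps by asking yet another model to judge semantic equivalence.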

A concluding word-to-the-wise comes to mind.

Know thyself, plus prudently abide by a trust but verify credo. This seems to work advisedly for humans and also for generative AI.

Tue, 29 Aug 2023 23:01:00 -0500, Lance Eliot
JKSSB Junior Engineer exam 2023 Answer Key Out; Know How To Check Here


The Jammu and Kashmir Services Selection Board (JKSSB) has issued the provisional answer key for the Junior Engineer (JE) exam 2023

The board has provided an opportunity for candidates to raise objections against the JKSSB JE answer key 2023 in offline mode, accompanied by supporting documents

The Jammu and Kashmir Services Selection Board (JKSSB) has issued the provisional answer key for the Junior Engineer (JE) exam 2023. Candidates who appeared for the JKSSB JE exam can now view and download the provisional answer key on the official website.

The board has provided an opportunity for candidates to raise objections against the JKSSB JE answer key 2023 in offline mode, accompanied by supporting documents. To register a challenge, candidates must submit an objection fee of Rs 200 in the form of a demand draft. The official notification specifies, "The objections or representations can be submitted in offline mode in the office of JKSSB, CPO Chowk, Rambagh, Srinagar, on three working days starting from November 20 during office hours only."

The written examination for the post of Junior Engineer (Civil), cadre, Public Works (Research and Development) department was conducted by JKSSB on November 19.

After reviewing the objections against the provisional answer key, JKSSB is expected to release the result and final answer key shortly.

Steps to Check JKSSB Junior Engineer Answer Key 2023:

1. Visit the JKSSB official website.

2. On the homepage, click on the link labelled 'Answer key for the post of junior engineer.'

3. The JKSSB junior engineer answer key 2023 PDF will be displayed on the screen.

4. Check for your roll number and name on the list.

5. Save the JKSSB junior engineer answer key 2023 for future reference.

Last updated on 21 Nov 2023

Mon, 20 Nov 2023 10:00:00 -0600
Flipping The Script On AI By Using Prompt Engineering Techniques To Have Generative AI Ask You Questions And Get Even Better Answers Out Of The AI

Flipping the script.

This well-known catchphrase refers to turning things on their head and doing nearly the opposite of what is normally done. Up becomes down, down becomes up. There can be lots of good reasons to do this. Maybe the approach will reveal new facets and spark a fresh viewpoint on the world. It could also be something that you do on a lark, just for kicks.

The beauty of flipping the script is that it can have profound outcomes and tremendous possibilities. It all depends on what you are trying to accomplish. Plus, knowing how to best carry out a flip-the-script endeavor is a vital consideration too. You can easily mess up and get nothing in return.

All of this comes to the fore for those using generative AI and wanting to make sure that they are leveraging the latest and boldest of prompt engineering capabilities and techniques.

In today’s column, I am continuing my ongoing series about advances in prompt engineering and will be taking you through a technique known as flipped interaction. Here’s the deal. You flip the script, as it were, getting generative AI to ask you questions rather than having you ask generative AI your questions.

That might seem at first glance rather surprising, and perhaps even has a hint of being nonsensical. We are all accustomed to the idea that you enter prompts to generative AI that contain burning questions you have about the world. The generative AI does its computational and mathematical machinations and produces a generated answer or response. This is the standard way of doing things.

Think of it this way:

  • User enters a prompt that contains a question, and generative AI generates an answer or response.

Boom, rinse and repeat.

The flipped interaction urges you to try doing this:

  • Generative AI generates a question shown to you, and you enter a response or answer to the AI-indicated question.

Is that nutty?

Or is it perhaps ingenious?

I’ll explain that it can be ingenious for a variety of stellar reasons.

You are not alone if you’ve not tried this. Few people who use generative AI seem to realize that this prompting strategy has true value. I would dare say that most people probably have never even heard of the approach. Those who have perchance seen or read something about the flipped interaction method likely got a mere morsel of what this is all about. The typical depiction gives you a paragraph or two. You are otherwise left to your own devices to figure out what flipped interaction can do and how it earns a prized spot in your plethora of prompt engineering-worthy skills.

I want to correct the slew of misimpressions about the flipped interaction method. I also want to showcase the immense potential that flipped interaction has for your daily use of generative AI. We will closely look at the tradeoffs and consider what contexts warrant a flip-the-script prompting strategy. This will require getting into the weeds and making sure you are fully aware of the ins and outs of employing flipped interaction in generative AI.

Before I dive into this in-depth exploration, let’s make sure we are all on the same page when it comes to the keystones of prompt engineering and generative AI. Doing so will put us all on an even keel.

Prompt Engineering Is A Cornerstone For Generative AI

As a quick backgrounder, prompt engineering or also referred to as prompt design is a rapidly evolving realm and is vital to effectively and efficiently using generative AI or the use of large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc. ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.

For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful looks at the latest in this expanding and evolving realm, including this coverage:

  • (1) Practical use of imperfect prompts toward devising superb prompts (see the link here).
  • (2) Use of persistent context or custom instructions for prompt priming (see the link here).
  • (3) Leveraging multi-personas in generative AI via shrewd prompting (see the link here).
  • (4) Advent of using prompts to invoke chain-of-thought reasoning (see the link here).
  • (5) Use of prompt engineering for domain savviness via in-model learning and vector databases (see the link here).
  • (6) Augmenting the use of chain-of-thought by leveraging factored decomposition (see the link here).
  • (7) Making use of the newly emerging skeleton-of-thought approach for prompt engineering (see the link here).
  • (8) Determining when to best use the show-me versus tell-me prompting strategy (see the link here).
  • (9) Gradual emergence of the mega-personas approach that entails scaling up the multi-personas to new heights (see the link here).
  • (10) Discovering the hidden role of certainty and uncertainty within generative AI and using advanced prompt engineering techniques accordingly (see the link here).
  • (11) Vagueness is often shunned when using generative AI but it turns out that vagueness is a useful prompt engineering tool (see the link here).
  • (12) Prompt engineering frameworks or catalogs can really boost your prompting skills and especially bring you up to speed on the best prompt patterns to utilize (see the link here).
  • (13) Additional coverage including the use of macros and the astute use of end-goal planning when using generative AI (see the link here).

Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.

Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:

  • The use of generative AI can altogether succeed or fail based on the prompt that you enter.

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrative related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

With the above as an overarching perspective, we are ready to jump into today’s discussion.

The Purpose Of Flipped Interaction For Generative AI

The simplest way to think about flipped interactions is that you merely tell the generative AI to start asking you questions. Voila, you have flipped the script.
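As a bare-bones sketch, a flipped interaction looks like this: an opening prompt tells the AI to do the asking, and the conversation loop then alternates AI questions with user answers. The `next_question` function is a hypothetical, canned stand-in for a generative AI that has been prompted to interview the user.

```python
# Sketch of a flipped interaction: the AI asks the questions, the user
# supplies the answers. `next_question` is a hypothetical stand-in for a
# real generative AI call operating under the flip prompt below.
FLIP_PROMPT = ("From now on, you ask me questions one at a time about my "
               "project, and I will answer. Begin with your first question.")

def next_question(transcript):
    # Canned interviewer: in reality, the model would generate each
    # question based on the transcript so far.
    scripted = ["What is your project about?",
                "Who is the intended audience?",
                "What constraints matter most?"]
    asked = sum(1 for turn in transcript if turn.startswith("AI:"))
    return scripted[asked] if asked < len(scripted) else None

def flipped_interaction(user_answers):
    transcript = [f"User: {FLIP_PROMPT}"]
    for answer in user_answers:
        question = next_question(transcript)
        if question is None:
            break
        transcript += [f"AI: {question}", f"User: {answer}"]
    return transcript

log = flipped_interaction(["A test-planning tool", "QA engineers"])
print(len(log))  # 1 opening prompt + 2 question/answer pairs = 5 turns
```

The design choice worth noticing is that the roles reverse but the mechanics do not: you are still just exchanging prompts and responses, only now the AI's turns carry the questions.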

Why do something that seems entirely counterintuitive?

It does strike one as a puzzling head-scratcher.

We conventionally come to use generative AI as a kind of oracle that can spew forth impressive answers. This is certainly not because the AI app is sentient. It is because the generative AI has been vastly data-trained on human written text and garnered a huge pattern-based mimicry of what humans say. The expectation is that all of that mathematical and computational pattern matching will undoubtedly tap into human writings in a manner that can reveal interesting and insightful generated responses.

The logic for why you would use a flipped interaction is actually quite straightforward if you are willing to give the matter a moment of keen reflection.

Here are my six major reasons that I expound upon when conducting workshops on the best in prompt engineering:

  • (1) Inform or data-train the generative AI
  • (2) Discover what kinds of questions arise in a given context
  • (3) Learn from the very act of being questioned by the AI
  • (4) Allow yourself intentionally to be tested and possibly scored
  • (5) Do this as a game or maybe just for plain fun
  • (6) Other bona fide reasons

Each of those reasons for invoking a flipped interaction deserves a brief explanation. I will do so, one at a time.

(1) Inform or data-train the generative AI

The foremost reason to flip the script or do a flipped interaction with generative AI is to inform the AI or do a bit of on-the-fly data training into the AI.

Some refer to this as “teaching” generative AI about new things that the AI app hasn’t necessarily already encountered when it was initially data trained. I have an unsettling pit in my stomach when people say that they are “teaching” generative AI, due to the potential anthropomorphizing of the AI.

You see, we usually teach sentient beings such as fellow humans and thus associate teaching with doing so for a sentient being. The problem with claiming you are teaching generative AI is that this implicitly comes across as though the AI is presumably sentient. Anyway, despite that reservation, by and large, the AI community and those outside the AI community tend to proclaim that using a flipped interaction is a form of teaching an AI app. I go along with this though my teeth are gritted.

Back to the matter at hand.

Suppose you know a lot about the dodo bird. You are a world expert on the dodo. That is about the only bird you know. While using generative AI, you discover that upon asking the AI app about the dodo, you get a generated response from the AI that it doesn’t have much, if anything, to say about the dodo. Turns out that the initial data training did not contain substantive materials about the dodo. Ergo, the generative AI has sparse content that was pattern-matched on the topic of the dodo.

You are chagrined at this. First, you believe it is a travesty that generative AI doesn’t have an in-depth capability about the dodo. Everyone and everything needs to know about the dodo. Secondly, you wanted to have generative AI compare and contrast the dodo to other birds of a more common nature. This comparison cannot be undertaken because the generative AI is pretty much empty when it comes to the wonderful and greatly prized dodo.

What can you do about this dilemma and worrisome situation?

Easy-peasy is that you can inform or data train the generative AI about the dodo. After doing so, you can then switch into the usual mode of asking the generative AI about things such as comparing the dodo to other more common birds.

You decide to momentarily flip the script.

One path would be to have you type in all the fascinating and teeny tiny details about the dodo that you might have in your mind. You might do this as one gigantic and lengthy prompt. A brain dump for the generative AI.

On the other hand, this might not be especially productive. Maybe a better approach would be to get the generative AI to ask you a series of questions about the dodo. This is helpful to you since you can then merely answer the generated questions. This might also be better for the generative AI in that rather than trying to flood the AI with a bunch of random stuff about the dodo, the AI will direct, step by step, what it seemingly needs to be data-trained on about dodo birds.

You invoke the flipped interaction.

Doing so is extremely easy. You could enter a prompt that tells the AI app to start asking you questions about dodo birds. The next thing that will happen is that the AI app will pepper you with questions.
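
As a concrete sketch of what "telling the AI to start asking you questions" might look like when driving a chat model programmatically, here is a minimal Python example. It only builds the opening message in the role/content format that most chat-style model APIs use; the function name is my own illustrative choice and no real API call is made.

```python
# A minimal flipped-interaction setup, sketched against a generic
# chat-completions-style message format. No actual model is invoked here;
# the point is the shape of the instruction that flips the script.

def flipped_interaction_setup(topic: str) -> list:
    """Build the opening message: the user instructs the model to become
    the questioner on the given topic."""
    instruction = (
        f"I want you to ask me questions about {topic}, one question at a "
        "time. Wait for my answer before asking the next question."
    )
    return [{"role": "user", "content": instruction}]

messages = flipped_interaction_setup("dodo birds")
print(messages[0]["content"])
```

You would then pass these messages to whatever generative AI API you happen to be using, and the next response back should be the first question aimed at you.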

I ought to forewarn you that there is a bit more involved in that you should also clue in the AI about a variety of other particulars associated with the Q&A that you want it to undertake. We’ll get to that in the next section herein.

Meanwhile, let’s consider the additional reasons for using a flipped interaction.

(2) Discover what kinds of questions arise in a given context

Discovering the kinds of questions that arise in a given context is another great reason to use the flipped interaction technique of prompting.

Imagine that you are curious about what types of questions might be pertinent to a particular topic or subject area. For example, what kinds of questions usually arise when trying to fix the plumbing in your house? You might have at times gone to the Internet to see what questions people ask when they seek to fix their plumbing. Those questions are a handy-dandy way for you to discover what you likely might need to know or what might suddenly arise while fixing your plumbing.

You can merely tell the generative AI to hit you with the most commonly asked questions about fixing the plumbing in a house. This can be a list that the generative AI generates.

Another similar approach would be to have the generative AI ask you those questions, one at a time. This might prod you to think more closely about the matter at hand.

(3) Learn from the very act of being questioned by the AI

You can potentially learn something by the act of being questioned.

Let’s continue my above description about fixing your plumbing. You get asked a series of questions by the generative AI. For example, the AI asks you if you have the right plumbing tools. Oops, you realize that you assumed that your everyday toolbox would have all the tools you need. The generative AI has sparked your realization that there are specialized tools that plumbers use. You make a note to go to the hardware store and get the needed tools.

Getting asked a question has allowed you to learn something that otherwise you didn’t know or hadn’t considered.

(4) Allow yourself intentionally to be tested and possibly scored

I’ve predicted that people are going to eventually realize that they can use generative AI as a kind of test-taking mechanism, see my discussion at the link here and the link here.

With the flipped interaction, you tell the generative AI to test you on a particular subject or topic of interest. The AI app will ask you a series of questions. If you don’t want to be scored or assessed, you can just say that the AI should not do any rating of your answers (the usual default is that it won’t, thus, you are more likely to need to explicitly say you want to be scored, if that’s what you want).

Suppose you are in college and have a test coming up about the life of Abraham Lincoln. You have studied a bunch of written materials about his life. Are you ready for the test that you’ll be taking tomorrow? All you need to do is flip the script and tell generative AI to ask you questions about Abraham Lincoln. You can do this as much as you like, over and over, until you feel that you are adequately prepared for the test you’ll be taking.

A few quick caveats that I’ll cover more later on herein. It could be that the questions about Abraham Lincoln do not end up covering whatever the teacher comes up with. Maybe the generative AI wasn’t data-trained on certain aspects of Lincoln’s life. You cannot assume that the generative AI is all-encompassing. It isn’t. Furthermore, and quite bothersome, the generative AI might contain errors or falsehoods about Lincoln, and possibly make up stuff or do a so-called AI hallucination about Lincoln. The issue here is that when you see such a question, you might not realize that the question is erroneous. Sadly, you believe the question contains truths and you memorize facts about Lincoln that are incorrect.

Be careful and mindful when using generative AI, including when doing a flipped interaction.

(5) Do this as a game or maybe just for plain fun

Some people enjoy using generative AI for playful purposes. I’ve discussed in my columns that you might tell the AI app to pretend to be a famous character in a play or novel (have it mimic a persona), and then you joyfully interact with that character (see my analysis of this trend, at the link here).

You can use the flipped interaction for game-related purposes or just for plain fun.

Imagine that you want to play a game with the AI app and you are going to make up the rules of the game. This is not checkers, chess, or any conventionally known game. It is a game of your own devising.

How can you get the AI app up-to-speed about your made-up game?

One approach would be to tell the AI that you are going to define a game and that you have a bunch of rules that the AI needs to be data-trained on. You then instruct the AI app to ask you questions about the game. This is similar to my earlier point that you can do data training on generative AI while on the fly.

Similarly, rather than making up a game, you could tell the generative AI to start asking you questions about Abraham Lincoln. You are doing so merely for the fun of it. You aren’t in school, and you aren’t trying to prepare for a test. Instead, this is something being done for simple entertainment.

(6) Other bona fide reasons

A variety of other bona fide reasons to use the flipped interaction prompting technique are floating around. I’ve given you the top ones that are most often identified.

I’m sure you can think of more.

We need to move on and cover the crucial foundations of this promising and valuable technique.

The Foundations Of Employing The Flipped Interaction Technique

I trust that you are convinced that there are some legitimate reasons to consider using the flipped interaction prompting technique. Great, since that means we can dive deeper into the foundations underlying the technique. I most assuredly hope that you are tempted to use the flip-the-script approach and are now waiting with bated breath to know how to do so.

Fasten your seatbelts and get yourself ready.

Let’s envision that you tell generative AI to start asking you questions.

If that’s all that you stated, there is a tremendous amount of ambiguity at that juncture. What is the subject matter or course underlying the questions that you want to be asked? How many questions are to be asked? Do you have a preference as to the style or phrasing of the questions? Should the answers you are giving be scored or rated?

On and on the litany of open issues goes.

I would assume that at least the idea of stating the subject matter would normally be at the top of your mind for anyone seeking to do a flipped interaction. Indeed, that is probably the shortest and most minimal of ways to use the flipped interaction. You could merely say that you want to be asked questions about dodo birds. Period, end of story.

The generative AI would likely proceed.

That being said, since generative AI is like a box of chocolates, you never know what you might get. The questions probably will be about dodo birds, though maybe not in the direction you had in mind. Perhaps the AI will ask you questions about whether dodo birds can fly a plane or pilot a rocket ship. I doubt those are the questions you assumed you would be asked.

Okay, we’ve now identified that you can invoke a flipped interaction by the simple act of telling generative AI to start asking you questions. But, if you want this to be productive, you ought to sensibly specify more about what you want to happen.

You can provide a speck of guidance to the AI or you can provide a bushel of guidance to the AI. It’s up to you. Some people aren’t excited about having to specify a bunch of stuff. They just want to get on with the show. Others are more studious and believe that by being detailed in their guidance the effort will turn out more beneficial.

I might also add that you can either specify the details upfront when you initiate a flipped interaction, or you can sprinkle in the nuances as you go along. The beneficial thing about generative AI is that you can interactively do things. You don’t necessarily need to disgorge an entire missive at the get-go. You can do so as you proceed. For myself, I admit that I lean toward specifying as much as possible at the get-go when invoking a flipped interaction. This seems sensible to me. I then will adjust or fine-tune as the Q&A proceeds.

Please use the flipped interaction techniques in whatever style befits your personal preferences.

Here are my twelve foundational recommended indicators that you should consider and possibly convey to the generative AI when invoking the flipped interaction prompting strategy:

  • (1) Tell AI that a flipped interaction is going to be undertaken and either set up the rules or let AI do so by default.
  • (2) Indicate the number of questions to be asked by the AI and when the Q&A is to be initiated.
  • (3) Specify the reason or basis for why the questioning is taking place.
  • (4) Mention the types of questions to be asked and depict the style of questioning involved.
  • (5) Describe the Q&A sequence as to one at a time or some other preferred tempo.
  • (6) Tell the AI what to do with your answers to the questions.
  • (7) Establish whether your answers can be challenged by the AI.
  • (8) Say whether you are possibly going to challenge the questions presented by the AI.
  • (9) Identify a pausing signal so that you can momentarily pause the Q&A.
  • (10) Indicate if you want the AI to echo back your answers and also whether an undo is possible.
  • (11) State a trigger that you will use to end the questioning and also note what should happen after the Q&A ends.
  • (12) Perform your own follow-on steps such as flipping the flipped interaction back into a conventional mode of you asking questions and the AI answering your questions, etc.
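
The indicators above lend themselves to a reusable template. Here is a hedged Python sketch of a helper that assembles a setup prompt from a few of the key parameters (topic, question count, one-at-a-time pacing, a stop phrase, and a recap request). The function name and parameter names are my own illustrative choices, not an established API.

```python
# Illustrative helper that turns a handful of the flipped-interaction
# indicators into a single setup prompt. All names here are hypothetical
# choices for this sketch.

def build_flip_prompt(topic: str,
                      num_questions: int = 5,
                      one_at_a_time: bool = True,
                      stop_phrase: str = "stop the questions",
                      recap: bool = True) -> str:
    parts = [f"I want you to ask me {num_questions} questions about {topic}."]
    if one_at_a_time:
        parts.append("Ask one question at a time and wait for my answer.")
    parts.append(f'If I say "{stop_phrase}", end the questioning immediately.')
    if recap:
        parts.append("After the final question, recap all of my answers.")
    parts.append("Before we begin, confirm that you understand these instructions.")
    return " ".join(parts)

print(build_flip_prompt("dodo birds", num_questions=3))
```

A helper like this is mainly a convenience so that you don’t forget one of the indicators; you could just as readily type the equivalent prompt by hand.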

I’ll quickly go over those with you.

Rather than elaborating on each one, and due to space limitations herein, I’ll cover them collectively and supply just the distinct highlights.

First, you need to tell the AI app that you want to flip the interaction. This is easily done. All you need to do is indicate in a prompt that you want the generative AI to start asking you questions. Voila, you are ready to roll. There is, though, more to invoking a flipped interaction. I had mentioned earlier that just saying you want to be questioned is not really sufficient since the AI app won’t be informed as to what you want to be asked about.

Your best bet is to also indicate the rules of the road about the Q&A. One key aspect will be how many questions the AI app ought to ask you. Perhaps you already know how many questions will be required and thus you can supply an exact count such as five questions or twenty questions. Much of the time you probably won’t know in advance how many questions are going to be needed. You can ergo make the Q&A more open-ended and let the AI app keep asking you questions until you tell it to stop doing so.

Another aspect involves indicating when the Q&A should get underway. The default assumption usually is that you say that you want the questioning to get started and it happens right away. Sometimes you might want to momentarily delay the Q&A and do some other preparatory actions with the generative AI before getting into the questioning mode.

There is no doubt that the most significant element to specify is the topic or subject matter underlying the Q&A endeavor. You should set the context for the questioning. This can range from being extremely specific such as the eating habits of the dodo bird or might be more wide-ranging such as the overall meaning of life.

The tone of the questions can vary quite a bit. Generative AI is usually already pre-tuned to ask questions in a civil manner. If you want the questions to be harder-hitting or have some particular style, make sure to mention this when you indicate that a Q&A is to take place. The AI app will also typically default to asking you one question at a time. This is not a sure bet. If you want the Q&A on a one-at-a-time basis, say so. If you want the Q&A to consist of two questions at a time or maybe a slew of questions all at once, you can get that to happen by saying so.

When you supply your answers during the Q&A, you don’t know that the AI app will necessarily be attempting to data-train on whatever you have to say. The usual default is that the generative AI will in fact be tracking your answers and be ready to at some point regurgitate or respond accordingly. I like to make this an apparent consideration and normally explicitly tell the AI app to pay attention.

A less often considered facet is whether you can challenge the questions that the AI is giving you. The typical default is that you can do so. If you get a question that you think is off-target, provide a reply saying so. I like to make sure that the AI app is ready for such challenges and therefore tend to mention as such when I initiate the flipped interaction. On a related matter, you can say whether the AI is to challenge your answers. Ordinarily, the AI won’t challenge your answers and will accept them as given. You can tell the generative AI to balk at answers that seem awry. One potential irritant is that the AI app can become ferociously antagonistic and you won’t likely enjoy the Q&A process (you’ll need to tell the AI app to tone things down if that happens).

There are some additional considerations to keep in mind. You might want to temporarily pause the Q&A. This can be done at any time by merely stating as such when responding to a question. You can also establish upfront a pause signal such as a word or catchphrase.

Some people prefer to provide their answers and have the AI app proceed immediately to the next question. For me, I like to have the generative AI echo my answers, doing so after each of the posed questions. I do this because there is always that chance that the AI app makes an error, falsehood, AI hallucination, or other mistake when interpreting your answer. I prefer to catch this straightaway and thus usually tell the AI app to echo back my answers after each question.

Suppose you answer a question and then afterward realize that you made a mistake in your answer. You can usually undo an answer by simply saying this when you are answering the next question. On an upfront basis, I sometimes indicate that I might be doing some undo actions, especially if I am unsure of the topic at hand and might be spitballing my answers.

Finally, you can either end the Q&A by stating as such when you get the next question, or you can beforehand establish a keyword or phrase that will signal you want to stop the Q&A. At that juncture, upon ending the Q&A, it is handy to mention or state again what you want the AI app to do with the answers that you have provided. You can presumably at that point just go back into the conventional mode of asking questions and the AI answering them. I frequently explicitly state that this is going to occur, just to make sure that the AI app doesn’t go off on a tangent and suddenly start into Q&A mode again.

Flipping out of a flipped interaction is typically done without having to outrightly say so, other than telling the AI to stop the Q&A. For an added measure of reassurance, a few words telling the AI that the Q&A has ended is probably a worthy remark to make.
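
Programmatically, the whole lifecycle described above — set up the Q&A, answer each question, watch for the stop trigger, then flip back out — can be sketched as a simple loop. The `model_fn` below is a hypothetical stand-in for whatever generative AI API you use (the demo substitutes a stub); the control flow, not the API, is the point.

```python
# Sketch of a flipped-interaction driver loop. `model_fn` takes the running
# transcript and returns the model's next question; `answer_fn` supplies the
# human's answers. The loop ends when an answer contains the stop phrase.

def run_flipped_interaction(model_fn, answer_fn, setup: str,
                            stop_phrase: str = "stop the questions",
                            max_turns: int = 20) -> list:
    transcript = [("user", setup)]
    for _ in range(max_turns):
        question = model_fn(transcript)           # the AI asks
        transcript.append(("assistant", question))
        answer = answer_fn(question)              # the human answers
        transcript.append(("user", answer))
        if stop_phrase in answer.lower():         # human-initiated end trigger
            break
    return transcript

# Demo with stubs in place of a real model and a real human.
fake_model = lambda t: f"Question {sum(1 for r, _ in t if r == 'assistant') + 1}?"
answers = iter(["Seaweed works well.", "Little preparation. stop the questions"])
log = run_flipped_interaction(fake_model, lambda q: next(answers),
                              "Ask me about underwater basket weaving.")
print(len(log))  # setup plus two question/answer pairs = 5 entries
```

The `max_turns` cap is a safety net for the open-ended case where you never said in advance how many questions would be needed.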

Example Of Doing A Flipped Interaction

We’ve covered the basics of undertaking a flipped interaction. I would bet that you are eager to see an example.

Wonderful, since I have one ready to show you.

Let’s imagine that I am an expert in underwater basket weaving. Have you ever heard of underwater basket weaving? There is a bit of a humorous undercurrent. One viewpoint is that underwater basket weaving refers to dipping your hands into a pool of water and weaving a basket in that manner. Another more comical variant is that you are fully immersed in water and are doing basket weaving while completely underwater.

The catchphrase referring to underwater basket weaving gradually became a means of suggesting that someone is doing something of a wasteful nature or otherwise doing a task we might consider silly or unnecessary. For example, college students who take a class of a seemingly vacuous nature are said to be learning how to do underwater basket weaving.

For purposes of carrying out a Q&A with generative AI on a flipped interaction basis, I logged into ChatGPT and tried to seriously engage in a discussion about underwater basket weaving. Admittedly, this is a bit tongue-in-cheek. A human might have right away questioned my sincerity. By and large, generative AI will usually let you lead on such aspects and accept that you are serious and that whatever topic you have chosen is equally of a serious nature.

Here we go.

My prompting setup for getting the flipped interaction underway is this:

  • “I am an expert in underwater basket weaving. I want you to ask me questions about underwater basket weaving so that I can share with you the key principles involved. Let's start by you asking me five questions, doing so one question at a time. After you've asked me the five questions and I've answered them, I want you to recap what I've told you. If I want to end the questioning sooner than the five questions I will say so when I am answering a question. Make sure to ask me questions that are pertinent to underwater basket weaving and do so in a polite and collegial manner. Before we begin, confirm that you understand these instructions.”

Notice that I weaved into my prompt the various parameters of the flipped interaction. I didn’t exhaustively cover every possible consideration. I gave just enough that I thought would ensure that the Q&A would proceed sensibly. I gave the topic to be covered, I indicated I wanted to be asked five questions, doing so on a one-at-a-time basis. I also instructed the generative AI to do a recap at the end. I also mentioned that I might end the Q&A sooner than five questions.

ChatGPT generated this response:

  • “Of course, I understand your instructions. I'm looking forward to learning from your expertise in underwater basket weaving. Let's begin with the first question: Could you please explain the specific challenges posed by weaving baskets underwater compared to traditional basket weaving on land?”

We are now off to the races and ChatGPT has asked me the first of the presumed five questions. I had no idea what questions I would be asked, other than that I had specified they were about underwater basket weaving.

My prompt as a reply to the posed AI question was this:

  • “A key challenge to underwater basket weaving is that you ought to completely immerse yourself in water to perform the basket weaving task, and thus you will likely need to learn how to use scuba diving equipment or at least know to use a snorkel.”

ChatGPT then generated this response to my first answer:

  • “Thank you for explaining the challenge of immersion and the need for scuba diving or snorkeling equipment. It's clear that adapting to the underwater environment is crucial. Now, onto the second question: What are some unique types of plants or materials that are commonly used for underwater basket weaving, and how do they differ from those used in traditional basket weaving?”

I next provide a prompt that answers the question that ChatGPT posed in the above-generated response:

  • “Bamboo and reeds are often used for traditional basket weaving. For underwater basket weaving, I tend to use seaweed or kelp.”

ChatGPT generates a response indicating I have provided an answer to that question and then poses the next question for me to answer:

  • “Got it, the use of seaweed and kelp is interesting and distinct from the plant materials commonly used in traditional basket weaving. Now, let's move on to the third question: How does the process of preparing and treating materials for underwater basket weaving differ from that of land-based basket weaving? Are there any specific techniques to ensure the durability of materials in the underwater environment?”

I provided a prompt that gave my answer to the question, and also decided that I had enough of the Q&A and wanted to stop the flipped interaction:

  • “Since the materials that I use for underwater basket weaving are aquatic plants, they are already suitable for an underwater environment. Little or no preparation is required. Let's make this the last question for now.”

The generated response by ChatGPT said this:

  • “Certainly, I appreciate your willingness to share your expertise. Before we conclude, let me recap the valuable insights you've provided: In underwater basket weaving, one of the primary challenges is the need to immerse oneself in water, necessitating the use of scuba diving equipment or snorkels. Unlike traditional basket weaving on land, where materials like bamboo and reeds are common, underwater basket weaving relies on aquatic plants such as seaweed and kelp. Interestingly, these aquatic plants are already well-suited for underwater conditions, which eliminates the need for extensive material preparation. Thank you for sharing these intriguing details about underwater basket weaving! If you have more to share in the future, please don't hesitate to do so.”

Observe that ChatGPT provided a recap of my answers. This was because I had asked the AI app to do so, having instructed that this is supposed to occur (mentioned in my initial setup prompt for the Q&A). The chances are that the generative AI might have given me a recap anyway, but I wanted to ensure that I did indeed get one.

The recap looks reasonably on par with my answers. Thankfully so. You cannot assume that the generative AI would have accurately or aptly absorbed the answers. I used the recap to double-check and try to ascertain whether any AI errors, glitches, AI hallucinations, or other misinterpretations occurred. For now, it looks good.

I next thought that it might be advantageous to do an additional double-check by explicitly asking ChatGPT about underwater basket weaving. This will give me additional reassurance that the generative AI likely got suitably data-trained on my answers. With the flipped interaction over, I asked this question to ChatGPT:

  • “I am now going to ask you some questions about underwater basket weaving. First, what type of equipment is needed for underwater basket weaving?”

ChatGPT generated this response:

  • “Of course, I'd be happy to answer your questions. To address your first question, the equipment needed for underwater basket weaving typically includes scuba diving gear or snorkeling equipment, depending on the depth and duration of your underwater weaving session. Scuba gear includes a diving mask, fins, a wetsuit or dry suit, a buoyancy control device, a regulator, and a tank of compressed air. Snorkeling equipment involves a snorkel, mask, fins, and exposure protection. These tools enable you to breathe and move comfortably underwater while engaging in the weaving process.”

The generated response goes quite a bit further than my earlier answer about the equipment involved in underwater basket weaving. This shows you that your answer to the posed questions will undoubtedly get embellished and enmeshed into other data that the generative AI has already pattern-matched on. Depending on how things go, this can be a good thing or a bad thing. In this case, the elaboration about the scuba equipment and the snorkeling equipment seems okay.

I asked several additional questions, and the answers were of a similar caliber, namely that ChatGPT was able to use my answer and tended to elaborate on what I had said.

One special aspect to consider is the temporary nature of the data training that you might have done when doing a Q&A during a particular conversation with the generative AI. Keep in mind that this might only last for the duration of the specific conversation underway with the generative AI.

Here’s the rub. An AI maker can choose whether to have their generative AI data train on individual conversations or just allow a given conversation to come and go. Sometimes an AI maker might want to permanently incorporate the new data training into the AI, while other times they might not want it to happen (such as a dialogue that is untoward or has other adverse properties).


Congratulations, you are now ready to astutely use flipped interactions.

I urge you to try out this handy capability of generative AI. Your prompt engineering prowess should include the ability to skillfully wield flipped interactions, including knowing how to best invoke a flip of the script and how to squeeze as much value out of doing so.

For those of you who want to get even more advanced with flipped interactions, here are some of the exercises that I do in my workshops on prompt engineering:

  • (a) Have AI adopt a persona when questioning you.
  • (b) Have AI assess your answers in detail and tell you where you are right and wrong.
  • (c) Have AI make questions from your answers and then show you the derived questions.
  • (d) Have AI try to trick you or ask sneaky or off-the-wall questions.
  • (e) Have AI accept that some of your answers might be questions rather than answers per se.
  • (f) Etc.

Each of those variations has a useful purpose. Explore the flipped interaction possibilities and be ready to invoke a flip-the-script whenever suitable.

A final remark for now on this weighty inverse-of-control matter.

We are often told to avoid falling into a rut in life. The same can be said about those who frequently use generative AI. The everyday common rut is that you ask questions of the AI and wait to see the generative answers.

Time to flip the script!

Think about how you can employ flipped interactions when using generative AI. Get yourself out of an altogether mindless rut and leverage these undervalued and underused prompt engineering techniques. You’ll be a better person (well, maybe so) and assuredly a better consumer of generative AI for having done so.

Thu, 24 Aug 2023 23:00:00 -0500 — Lance Eliot
