let's face it.

fc:2036-6531-7492 | prince of rage. polymath. misanthrope. architorture student. digidestined. little monster. artist. idealist. omniscient. pokémon trainer. aquarius. reader. dreamer. shaman. hematophagous. free spirit. music listener.


Ask.  
Reblogged from shieldmaidenofsherwood

shieldmaidenofsherwood:

darkarcherprince:

shieldmaidenofsherwood:

how to be seductive:

  • head tilt
  • hooded eyes
  • raised eyebrow
  • little smirk

how to be evil:

  • head tilt
  • hooded eyes
  • raised eyebrow
  • little smirk

do you see the problem

http://media.tumblr.com/efcb2bf88426110bd076a8d4e7deb536/tumblr_inline_mu7w38IYFl1qf1wgx.gif

This might win for favorite addition to my post.

(via addictedartist)

Reblogged from likeacooldude
  • Before jerking off: i would literally suck 6 dicks rn
  • After jerking off: men are gross
Reblogged from bakrua

t3hsiggy:

bakrua:

ah yes the first pokemon battle of the game

tackle tackle tackle tackle tackle

"Enemy Bulbasaur used Growl"

“HA, YES, YOU FOOL, YOU HAVE FALLEN RIGHT INTO MY TRAP, FOR NOW I SHALL DEAL AN EXTRA TURN OF DAMAGE MORE THAN YOU”

(via cronvirus)

Reblogged from cubebreaker

cubebreaker:

Japan’s Nabana no Sato Botanical Garden used over 7,000,000 LED lights to create this amazing tribute to nature featuring displays of rainbows, auroras, and Mt. Fuji.

(via nitevid)

Reblogged from darkmatchwaldo

Reblogged from phantomrose96

(Source: phantomrose96, via cipherface)

Reblogged from the-personal-quotes
Reblogged from mindblowingscience
mindblowingscience:

Ethical trap: robot paralysed by choice of who to save

Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is
CAN we teach a robot to be good? Fascinated by the idea, roboticist Alan Winfield of Bristol Robotics Laboratory in the UK built an ethical trap for a robot – and was stunned by the machine’s response.

In an experiment, Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov’s fictional First Law of Robotics – a robot must not allow a human being to come to harm.

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. The work was presented on 2 September at the Towards Autonomous Robotic Systems meeting in Birmingham, UK.

Winfield describes his robot as an “ethical zombie” that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn’t understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, “my answer is: I have no idea”.

As robots integrate further into our everyday lives, this question will need to be answered. A self-driving car, for example, may one day have to weigh the safety of its passengers against the risk of harming other motorists or pedestrians. It may be very difficult to program robots with rules for such encounters.

But robots designed for military combat may offer the beginning of a solution. Ronald Arkin, a computer scientist at Georgia Institute of Technology in Atlanta, has built a set of algorithms for military robots – dubbed an “ethical governor” – which is meant to help them make smart decisions on the battlefield. He has already tested it in simulated combat, showing that drones with such programming can choose not to shoot, or try to minimise casualties during a battle near an area protected from combat according to the rules of war, like a school or hospital.

Arkin says that designing military robots to act more ethically may be low-hanging fruit, as these rules are well known. “The laws of war have been thought about for thousands of years and are encoded in treaties.” Unlike human fighters, who can be swayed by emotion and break these rules, automatons would not.

“When we’re talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces,” says Wendell Wallach, author of Moral Machines: Teaching Robots Right from Wrong. Still, he says, experiments like Winfield’s hold promise in laying the foundations on which more complex ethical behaviour can be built. “If we can get them to function well in environments when we don’t know exactly all the circumstances they’ll encounter, that’s going to open up vast new applications for their use.”

This article appeared in print under the headline “The robot’s dilemma”

Watch a video of these ‘ethical’ robots in action here
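The failure mode Winfield observed (a rescuer that re-plans every tick and commits to nothing) is easy to caricature in a few lines. The sketch below is purely illustrative and is not the experiment's actual code: the geometry, speeds, thresholds, and the `run` helper are all invented for the toy. A lone proxy gets rescued cleanly; two proxies drifting toward opposite holes make the rescuer abandon each target whenever the other looks more urgent, so it arrives too late for both.

```python
from math import copysign

HOLE = 10.0   # a proxy falls in when it reaches |x| >= HOLE
SAFE = 2.0    # a proxy pushed back to |x| <= SAFE counts as saved

def run(proxies, speed=2.0, steps=20):
    """Toy rescuer: starts at x=0 and each tick chases whichever active
    proxy is nearest a hole. A proxy within 1.0 of the rescuer is pushed
    1.0 back toward safety; otherwise it drifts 0.5 toward its hole."""
    rescuer = 0.0
    pos = dict(proxies)              # name -> x position
    saved, lost = set(), set()
    for _ in range(steps):
        active = {n: x for n, x in pos.items() if n not in saved | lost}
        if not active:
            break
        # re-plan every tick with no commitment: the source of the dithering
        target = max(active, key=lambda n: abs(active[n]))
        step = active[target] - rescuer
        rescuer += max(-speed, min(speed, step))
        for name, x in active.items():
            if abs(x - rescuer) <= 1.0:
                x += 1.0 if x < 0 else -1.0   # blocked: nudged back to safety
            else:
                x += copysign(0.5, x)         # drifts toward the hole at +/-10
            pos[name] = x
            if abs(x) <= SAFE:
                saved.add(name)
            elif abs(x) >= HOLE:
                lost.add(name)
    return saved, lost
```

With one proxy, `run({"A": -5.0})` herds it back to safety; with two symmetric proxies, `run({"A": -5.0, "B": 5.0})` loses both, a crude analogue of the 14-out-of-33 trials where indecision doomed everyone.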


Reblogged from historical-nonfiction
They are ill discoverers that think there is no land, when they can see nothing but sea. Francis Bacon (1561 - 1626), an English philosopher, statesman, scientist, jurist, orator, essayist, and author who was both Lord Chancellor and Attorney General of England. Perhaps his most lasting contribution to the world was the “Baconian method,” more commonly known as the “scientific method.” (via historical-nonfiction)

(Source: futilitycloset.com, via cipherface)

Reblogged from gaksdesigns
Reblogged from ultrafacts
ultrafacts:

Source. If you want more facts, follow Ultrafacts

(via sixpenceee)

Reblogged from dollpoetry
I crave touch, yet I flinch every time someone is close enough. I have become rather fearful I suppose.  (via dollpoetry)

(via educateddemons)

Reblogged from jonasgrossmann
jonasgrossmann:

mies van der rohe…

(via superarchitects)

Reblogged from beyondgoodand-evil
Reblogged from bigblueboo