Category:Author missing userpage
Pages in category "Author missing userpage"
The following 115 pages are in this category, out of 115 total.
C
- CarlFeynman's Answer to Dismythed & JWA's question on The Orthogonality Thesis
- Casejp's Answer to Should I engage in political or collective action like signing petitions or sending letters to politicians?
- Casejp's Answer to What if we put the AI in a box and have a second, more powerful, AI with the goal of preventing the first one from escaping?
- ChaosAlpha's Answer to Toby Buckley's question on Mesa-Optimizers
- Chlorokin's Answer to Could emulated minds do AI alignment research?
- Chlorokin's Answer to What are "coherence theorems" and what do they tell us about AI?
- Chlorokin's Answer to What if we put the AI in a box and have a second, more powerful, AI with the goal of preventing the first one from escaping?
- Chlorokin's Answer to What is "Do What I Mean"?
- Chlorokin's Answer to What is a "pivotal act"?
- Chlorokin's Answer to Will superintelligence make a large part of humanity unemployable?
- Command Master's Answer to M A's question on Real Inner Misalignment
- Command Master's Answer to Seeker.87's question on Real Inner Misalignment
L
- Answer to What are "human values"?
- Linnea's Answer to What are OpenAI Codex and GitHub Copilot?
- Linnea's Answer to What are the ethical challenges related to whole brain emulation?
- Answer to What does Elon Musk think about AI safety?
- Answer to What is "evidential decision theory"?
- Answer to What is "functional decision theory"?
- Answer to What is "hedonium"?
- Answer to What is a "quantilizer"?
- Answer to What is an "agent"?
- Answer to What is an "s-risk"?
- Answer to What is causal decision theory?
- Answer to What is GPT-3?
- Answer to What is meant by "AI takeoff"?
- Answer to What is the "long reflection"?
- Answer to What is the "orthogonality thesis"?
- Answer to Will an aligned superintelligence care about animals other than humans?
M
- Answer to I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?
- Answer to Can we get AGI by scaling up architectures similar to current ones, or are we missing key insights?
- Murphant's Answer to Could I contribute by offering coaching to alignment researchers? If so, how would I go about this?
- Murphant's Answer to Could we tell the AI to do what's morally right?
- Murphant's Answer to Do AIs suffer?
- Murphant's Answer to How can I contribute in the area of community building?
- Murphant's Answer to How likely is it that governments will play a significant role? What role would be desirable, if any?
- Murphant's Answer to How many resources did the processes of biological evolution use to evolve intelligent creatures?
- Murphant's Answer to Might an aligned superintelligence force people to have better lives and change more quickly than they want?
- Murphant's Answer to What are plausible candidates for "pivotal acts"?
- Murphant's Answer to What are some important examples of specialised terminology in AI alignment?
- Murphant's Answer to What are the "win conditions"/problems that need to be solved?
- Murphant's Answer to What is "metaphilosophy" and how does it relate to AI safety?
- Answer to What safety problems are associated with whole brain emulation?
- Murphant's Answer to What's especially worrisome about autonomous weapons?
Q
- QueenDaisy's Answer to Are any major politicians concerned about this?
- QueenDaisy's Answer to Might an aligned superintelligence force people to "upload" themselves, so as to more efficiently use the matter of their bodies?
- QueenDaisy's Answer to What could a superintelligent AI do, and what would be physically impossible even for it?
- Answer to Can people contribute to alignment by using proof assistants to generate formal proofs?
- Quintin Pope's Answer to Will superintelligence make a large part of humanity unemployable?
- QZ's Answer to Where can I find mentorship and advice for becoming a researcher?
R
- Redshift's Answer to In "aligning AI with human values", which humans' values are we talking about?
- Answer to How can we interpret what all the neurons mean?
- RoseMcClelland's Answer to How do you figure out how model performance scales?
- RoseMcClelland's Answer to How does MIRI communicate their view on alignment?
- RoseMcClelland's Answer to How is Beth Barnes evaluating LM power seeking?
- Answer to How is OpenAI planning to solve the full alignment problem?
- RoseMcClelland's Answer to How is the Alignment Research Center (ARC) trying to solve Eliciting Latent Knowledge (ELK)?
- Answer to How might Shard Theory help with alignment?
- RoseMcClelland's Answer to How would we align an AGI whose learning algorithms / cognition look like human brains?
- RoseMcClelland's Answer to How would you explain the theory of Infra-Bayesianism?
- RoseMcClelland's Answer to What are Encultured working on?
- Answer to What are Scott Garrabrant and Abram Demski working on?
- RoseMcClelland's Answer to What does Evan Hubinger think of Deception + Inner Alignment?
- RoseMcClelland's Answer to What does MIRI think about technical alignment?
- RoseMcClelland's Answer to What does Ought aim to do?
- RoseMcClelland's Answer to What does the scheme Externalized Reasoning Oversight involve?
- RoseMcClelland's Answer to What is Aligned AI / Stuart Armstrong working on?
- RoseMcClelland's Answer to What is an adversarial oversight scheme?
- Answer to What is Anthropic's approach to LLM alignment?
- Answer to What is Conjecture's epistemology research agenda?
- Answer to What is Conjecture's Scalable LLM Interpretability research agenda?
- RoseMcClelland's Answer to What is Conjecture, and what is their team working on?
- RoseMcClelland's Answer to What is David Krueger working on?
- RoseMcClelland's Answer to What is Dylan Hadfield-Menell's thesis on?
- RoseMcClelland's Answer to What is FAR's theory of change?
- RoseMcClelland's Answer to What is the Future of Humanity Institute working on?
- RoseMcClelland's Answer to What is John Wentworth's plan?
- RoseMcClelland's Answer to What is Refine?
- RoseMcClelland's Answer to What is the Center for Human Compatible AI (CHAI)?
- Answer to What is the Center on Long-Term Risk (CLR) focused on?
- Answer to What is DeepMind's safety team working on?
- Answer to What is the goal of Simulacra Theory?
- RoseMcClelland's Answer to What is the purpose of the Visible Thoughts Project?
- RoseMcClelland's Answer to What is Truthful AI's approach to improving society?
- RoseMcClelland's Answer to What language models are Anthropic working on?
- RoseMcClelland's Answer to What other organizations are working on technical AI alignment?
- RoseMcClelland's Answer to What projects are CAIS working on?
- RoseMcClelland's Answer to What projects are Redwood Research working on?
- RoseMcClelland's Answer to What work is Redwood doing on LLM interpretability?
- RoseMcClelland's Answer to Who is Jacob Steinhardt and what is he working on?
- RoseMcClelland's Answer to Who is Sam Bowman and what is he working on?
T
- TapuZuko's Answer to Isn't the real concern autonomous weapons?
- TapuZuko's Answer to Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?
- Tinytitan's Answer to Could we get significant biological intelligence enhancements long before AGI?
- TJ6K's Answer to What beneficial things would an aligned superintelligence be able to do?
- Answer to Why would we only get one chance to align a superintelligence?
Y
- Yaakov's Answer to What are the different versions of decision theory?
- Yaakov's Answer to Which organizations are working on AI alignment?
- Yevgeniy Andreyevich's Answer to afla light's question on 10 Reasons to Ignore AI Safety
- Yevgeniy Andreyevich's Answer to Lapis Salamander's question on Intro to AI Safety
- Yevgeniy Andreyevich's Answer to Rich Traube's question on WNJ: Think of AGI like a Corporation?