Create tags

From Stampy's Wiki

These 85 tags are in use but do not yet have tag pages.

All uncreated tags

Create: Academia - Used by 3: Are there any courses on technical AI safety topics?, Dpaleka's Answer to Are there any courses on technical AI safety topics?, How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group or a government?
Create: Acausal trade - Used by 1: How might "acausal trade" affect alignment?
Create: Actors - Used by 4: Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?, How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group or a government?, What are some of the leading AI capabilities organizations?, Which country will AGI likely be created by, and does this matter?
Create: Agent foundations - Used by 3: If intelligence is the ability to predict observations, should I quantify this in terms of a deterministic or probabilistic prediction function?, Plex's Answer to What is "agent foundations"?, What is "agent foundations"?
Create: Agi fire alarm - Used by 4: How would I know if AGI were imminent?, Should we expect "warning shots" before an unrecoverable catastrophe?, What would a world shortly before AGI look like?, Would "warning shots" make a difference and, if so, would they be helpful or harmful?
Create: Algorithmic progress - Used by 1: How software- and/or hardware-bottlenecked are we on AGI?
Create: Alignment proposals - Used by 2: Dropinthesea's Answer to What is "Do What I Mean"?, What alignment strategies are scalably safe and competitive?
Create: Anthropic - Used by 4: RoseMcClelland's Answer to How can we interpret what all the neurons mean?, RoseMcClelland's Answer to What is Anthropic's approach to LLM alignment?, RoseMcClelland's Answer to How do you figure out model performance scales?, What is Anthropic's approach to LLM alignment?
Create: Arc - Used by 1: How is the Alignment Research Center (ARC) trying to solve Eliciting Latent Knowledge (ELK)?
Create: Biological cognitive enhancement - Used by 2: Could we get significant biological intelligence enhancements long before AGI?, Tinytitan's Answer to Could we get significant biological intelligence enhancements long before AGI?
Create: Center for human compatible ai (chai) - Used by 1: RoseMcClelland's Answer to What is the Center for Human Compatible AI (CHAI)?
Create: Civilization - Used by 2: Answer to What harm could a single superintelligence do when it took so many humans to build civilization?, What harm could a single superintelligence do when it took so many humans to build civilization?
Create: Communication - Used by 4: How does MIRI communicate their view on alignment?, Plex's Answer to What’s a good AI alignment elevator pitch?, What links are especially valuable to share on social media or other contexts?, What’s a good AI alignment elevator pitch?
Create: Community - Used by 2: How can I contribute in the area of community building?, Murphant's Answer to How can I contribute in the area of community building?
Create: Compute - Used by 2: Jrmyp's Answer to What are "scaling laws" and how are they relevant to safety?, How software- and/or hardware-bottlenecked are we on AGI?
Create: Content - Used by 2: Jrmyp's Answer to What are some good podcasts about AI alignment?, What are some good podcasts about AI alignment?
Create: Cooperative inverse reinforcement learning (cirl) - Used by 2: Dropinthesea's Answer to What is "Do What I Mean"?, RoseMcClelland's Answer to What is the Center for Human Compatible AI (CHAI)?
Create: Counterfactuals - Used by 1: RoseMcClelland's Answer to What are Scott Garrabrant and Abram Demski working on?
Create: Creativity - Used by 2: Aprillion's Answer to Can AI be creative?, Can AI be creative?
Create: Cybersecurity - Used by 1: How might AGI interface with cybersecurity?
Create: Debate - Used by 2: Plex's Answer to What is AI Safety via Debate?, What is AI Safety via Debate?
Create: Deep learning - Used by 1: Nico Hill2's Answer to Would AI alignment be hard with deep learning?
Create: Deepmind - Used by 4: Answer to Superintelligence sounds like science fiction. Do people think about this in the real world?, Plex's Answer to What are some of the most impressive recent advances in AI capabilities?, What does the scheme Externalized Reasoning Oversight involve?, What is the DeepMind's safety team working on?
Create: Differential technological development - Used by 1: How possible (and how desirable) is it to change which path humanity follows to get to AGI?
Create: Do what i mean - Used by 3: Chlorokin's Answer to What is "Do What I Mean"?, How might a real-world AI system that receives orders in natural language and does what you mean look?, What is "Do What I Mean"?
Create: Education - Used by 2: Plex's Answer to What training programs and courses are available for AGI safety?, What training programs and courses are available for AGI safety?
Create: Elk - Used by 1: How is the Alignment Research Center (ARC) trying to solve Eliciting Latent Knowledge (ELK)?
Create: Emotions - Used by 1: Could AI have basic emotions?
Create: Encultured - Used by 1: What are Encultured working on?
Create: Epistomology - Used by 1: RoseMcClelland's Answer to What is Conjecture's epistemology research agenda?
Create: Ethics - Used by 7: Nico Hill2's Answer to What are the ethical challenges related to whole brain emulation?, Murphant's Answer to Could we tell the AI to do what's morally right?, Murphant's Answer to Do AIs suffer?, Murphant's Answer to Might an aligned superintelligence force people to have better lives and change more quickly than they want?, Severin's Answer to What are the leading theories in moral philosophy and which of them might be technically the easiest to encode into an AI?, What are the ethical challenges related to whole brain emulation?, What are the leading theories in moral philosophy and which of them might be technically the easiest to encode into an AI?
Create: Eutopia - Used by 4: Gelisam's Answer to What would a good future with AGI look like?, Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?, What beneficial things would an aligned superintelligence be able to do?, What would a good future with AGI look like?
Create: Evolution - Used by 1: Murphant's Answer to How much resources did the processes of biological evolution use to evolve intelligent creatures?
Create: Far - Used by 1: RoseMcClelland's Answer to What is FAR's theory of change?
Create: Friendly ai - Used by 4: Answer to Can we add "friendliness" to any artificial intelligence design?, Answer to What is "friendly AI"?, Can we add "friendliness" to any artificial intelligence design?, What is "friendly AI"?
Create: Goals - Used by 2: Answer to What can we expect the motivations of a superintelligent machine to be?, What can we expect the motivations of a superintelligent machine to be?
Create: Governance - Used by 4: Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?, How does the field of AI Safety want to accomplish its goal of preventing existential risk?, Plex's Answer to How does the field of AI Safety want to accomplish its goal of preventing existential risk?, Which organizations are working on AI policy?
Create: Government - Used by 10: Are any major politicians concerned about this?, Are there any AI alignment projects which governments could usefully put a very large amount of resources into?, Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?, How does the field of AI Safety want to accomplish its goal of preventing existential risk?, How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group or a government?, How likely is it that governments will play a significant role? What role would be desirable, if any?, How tractable is it to get governments to play a good role (rather than a bad role) and/or to get them to play a role at all (rather than no role)?, Murphant's Answer to How likely is it that governments will play a significant role? What role would be desirable, if any?, Plex's Answer to How does the field of AI Safety want to accomplish its goal of preventing existential risk?, QueenDaisy's Answer to Are any major politicians concerned about this?
Create: Hedonium - Used by 4: Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?, Might trying to build a hedonium-maximizing AI be easier and more likely to work than trying for eudaimonia?, TapuZuko's Answer to Might an aligned superintelligence immediately kill everyone and then go on to create a "hedonium shockwave"?, What is "hedonium"?
Create: Implementation - Used by 1: Plex's Answer to Why can’t we just…
Create: Information security - Used by 2: How important is research closure and OPSEC for capabilities-synergistic ideas?, How might we reduce the diffusion of dangerous AI technology to insufficiently careful actors?
Create: Infra-bayesianism - Used by 1: How would you explain the theory of Infra-Bayesianism?
Create: Infrastructure - Used by 1: How does the current global microchip supply chain work, and who has political power over it?
Create: Inside view - Used by 2: Helenator's Answer to How do I form my own views about AI safety?, How do I form my own views about AI safety?
Create: Institutions - Used by 4: How does the field of AI Safety want to accomplish its goal of preventing existential risk?, How likely are AI organizations to respond appropriately to the risks of their creations?, How successfully have institutions managed risks from novel technology in the past?, Plex's Answer to How does the field of AI Safety want to accomplish its goal of preventing existential risk?
Create: Intelligence amplification - Used by 1: Are there promising ways to make AI alignment researchers smarter?
Create: Investmenting - Used by 1: How should I change my financial investments in response to the possibility of transformative AI?
Create: Megaprojects - Used by 1: Are there any AI alignment projects which governments could usefully put a very large amount of resources into?
Create: Mentorship - Used by 2: QZ's Answer to Where can I find mentorship and advice for becoming a researcher?, Where can I find mentorship and advice for becoming a researcher?
Create: Metaethics - Used by 1: How is metaethics relevant to AI alignment?
Create: Metaphilosophy - Used by 2: Murphant's Answer to What is "metaphilosophy" and how does it relate to AI safety?, What is "metaphilosophy" and how does it relate to AI safety?
Create: Metaphors - Used by 2: Plex's Answer to Is there a danger in anthropomorphizing AI’s and trying to understand them in human terms?, Is there a danger in anthropomorphizing AI’s and trying to understand them in human terms?
Create: Multipolar - Used by 1: Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?
Create: Neural networks - Used by 2: Are there any courses on technical AI safety topics?, Dpaleka's Answer to Are there any courses on technical AI safety topics?
Create: Objections - Used by 3: Plex's Answer to What are some objections to the importance of AI alignment?, What are some common objections to the need for AI alignment, and brief responses to these?, What are some objections to the importance of AI alignment?
Create: Openai - Used by 3: RoseMcClelland's Answer to How is OpenAI planning to solve the full alignment problem?, Linnea's Answer to What does Elon Musk think about AI safety?, How is OpenAI planning to solve the full alignment problem?
Create: Outreach - Used by 2: Plex's Answer to How can I convince others and present the arguments well?, What convinced people working on AI alignment that it was worth spending their time on this cause?
Create: Paradigm - Used by 2: If AGI comes from a new paradigm, how likely is it to arise late in the paradigm when it is already deployed at scale, versus early on when only a few people are exploring the idea?, To what extent are there meaningfully different paths to AGI, versus just one path?
Create: Pattern-matching - Used by 2: Plex's Answer to How might non-agentic GPT-style AI cause an "intelligence explosion" or otherwise contribute to existential risk?, How might non-agentic GPT-style AI cause an "intelligence explosion" or otherwise contribute to existential risk?
Create: Person-affecting view - Used by 2: ElloMelon's Answer to If I only care about helping people alive today, does AI safety still matter?, If I only care about helping people alive today, does AI safety still matter?
Create: Personal action - Used by 2: Dropinthesea's Answer to What actions can I take in under five minutes to contribute to the cause of AI safety?, How should I personally prepare for when transformative AI arrives?
Create: Philosophy - Used by 1: What are some problems in philosophy that are related to AI safety?
Create: Physics - Used by 2: QueenDaisy's Answer to What could a superintelligent AI do, and what would be physically impossible even for it?, What could a superintelligent AI do, and what would be physically impossible even for it?
Create: Plex's answer to what are some good resources on ai alignment? - Used by 1: Plex's Answer to What training programs and courses are available for AGI safety?
Create: Politics - Used by 1: Casejp's Answer to Should I engage in political or collective action like signing petitions or sending letters to politicians?
Create: Power seeking - Used by 1: How is Beth Barnes evaluating LM power seeking?
Create: Preferences - Used by 2: Answer to Can't we just tell an AI to do what we want?, Can't we just tell an AI to do what we want?
Create: Productivity - Used by 5: Could I contribute by offering coaching to alignment researchers? If so, how would I go about this?, How can I be a more productive student/researcher?, How can I support alignment researchers to be more productive?, Murphant's Answer to Could I contribute by offering coaching to alignment researchers? If so, how would I go about this?, Severin's Answer to How can I be a more productive student/researcher?
Create: Redwood research - Used by 1: RoseMcClelland's Answer to What work is Redwood doing on LLM interpretability?
Create: Regulation - Used by 4: Are there any plausibly workable proposals for regulating or banning dangerous AI research?, How does the field of AI Safety want to accomplish its goal of preventing existential risk?, Is there something useful we can ask governments to do for AI alignment?, Plex's Answer to How does the field of AI Safety want to accomplish its goal of preventing existential risk?
Create: Research assistants - Used by 3: Are there promising ways to make AI alignment researchers smarter?, Chlorokin's Answer to Could emulated minds do AI alignment research?, Could emulated minds do AI alignment research?
Create: Robustness - Used by 2: Are there any courses on technical AI safety topics?, Dpaleka's Answer to Are there any courses on technical AI safety topics?
Create: Security mindset - Used by 3: Answer to Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?, Plex's Answer to Why can’t we just…, Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?
Create: Shard theory - Used by 1: How might Shard Theory help with alignment?
Create: Simulation hypothesis - Used by 2: Is the question of whether we're living in a simulation relevant to AI safety? If so, how?, TapuZuko's Answer to Is the question of whether we're living in a simulation relevant to AI safety? If so, how?
Create: Singleton - Used by 2: Linnea's Answer to What does Elon Musk think about AI safety?, Does it make sense to focus on scenarios where change is rapid and due to a single actor, or slower and dependent on getting agreements between several relevant actors?
Create: Solutions - Used by 2: Answer to What would a good solution to AI alignment look like?, What would a good solution to AI alignment look like?
Create: Stable win condition - Used by 2: Plex's Answer to If we solve alignment, are we sure of a good future?, If we solve alignment, are we sure of a good future?
Create: Success models - Used by 2: Plex's Answer to If we solve alignment, are we sure of a good future?, If we solve alignment, are we sure of a good future?
Create: Tech companies - Used by 1: How likely is it that AGI will first be developed by a large established organization, rather than a small startup, an academic group or a government?
Create: Technological unemployment - Used by 3: Chlorokin's Answer to Will superintelligence make a large part of humanity unemployable?, Quintin Pope's Answer to Will superintelligence make a large part of humanity unemployable?, Will superintelligence make a large part of humanity unemployable?
Create: Test - Used by 1: Do AIs suffer?
Create: Transhumanism - Used by 2: Beamnode's Answer to Does the importance of AI risk depend on caring about transhumanist utopias?, Does the importance of AI risk depend on caring about transhumanist utopias?
Create: Truthful ai - Used by 1: RoseMcClelland's Answer to What is Truthful AI's approach to improve society?
Create: Values - Used by 3: Luca's Answer to What is a "value handshake"?, Linnea's Answer to Will an aligned superintelligence care about animals other than humans?, What assets need to be protected by/from the AI? Are "human values" sufficient for it?