Create tags

From Stampy's Wiki

These 69 tags are in use but do not yet have tag pages.

All uncreated tags

Create: Academia - Used by 1: How likely is it that AGI is first developed by a large established org, versus a small startup-y org, versus an academic group, versus a government?
Create: Acausal trade - Used by 1: How might acausal trade affect alignment?
Create: Actors - Used by 4: Does it make sense to focus on scenarios where change is rapid and from a single actor or slower and depends on getting agreements between several relevant actors?, How likely is it that AGI is first developed by a large established org, versus a small startup-y org, versus an academic group, versus a government?, What are some of the leading AI capabilities organizations?, Which country will AGI be created by, and does this matter?
Create: Agency - Used by 4: Linnea's Answer to What is an agent?, How might things go wrong with AI even without an agentic superintelligence?, What is an agent?, What's meant by calling an AI "agenty" or "agentlike"?
Create: Agent foundations - Used by 1: What is agent foundations?
Create: Agi fire alarm - Used by 4: How would I know whether AGI is imminent?, Should we expect warning shots before an unrecoverable catastrophe?, What does the world shortly before AGI look like?, Would warning shots make a difference, and if so, would they be helpful or harmful?
Create: Algorithmic progress - Used by 1: How software- and/or hardware-bottlenecked are we on AGI?
Create: Alignment proposals - Used by 1: What alignment strategies are scalably safe and competitive?
Create: Alignment targets - Used by 1: What should the first AGI systems be aligned to do?
Create: Arms race - Used by 1: What military applications of AI will likely exist?
Create: Awareness - Used by 1: Why is the future of AI suddenly in the news? What has changed?
Create: Biological cognitive enhancement - Used by 1: Could we get significant biological intelligence enhancements long before AGI?
Create: Civilization - Used by 2: Answer to What harm could a single superintelligence do, when it took so many humans to build civilization?, What harm could a single superintelligence do, when it took so many humans to build civilization?
Create: Communication - Used by 1: What’s a good AI alignment elevator pitch?
Create: Community - Used by 1: I want to contribute through community building, how?
Create: Compute - Used by 1: How software- and/or hardware-bottlenecked are we on AGI?
Create: Content - Used by 2: Jrmyp's Answer to What are some good podcasts about AI alignment?, What are some good podcasts about AI alignment?
Create: Creativity - Used by 2: Aprillion's Answer to Can AI be creative?, Can AI be creative?
Create: Cybersecurity - Used by 1: How might AGI interface with cybersecurity?
Create: Deep learning - Used by 1: Nico Hill2's Answer to Would AI alignment be hard with deep learning?
Create: Deepmind - Used by 1: Answer to Superintelligence sounds a lot like science fiction. Do people think about this in the real world?
Create: Differential technological development - Used by 1: How possible (and how desirable) is it to change which path humanity follows to get to AGI?
Create: Do what i mean - Used by 2: What might a real-world AI system that receives orders in natural language and does what you mean look like?, What is 'Do What I Mean'?
Create: Emotions - Used by 1: Could AI have basic emotions?
Create: Ethics - Used by 3: Nico Hill2's Answer to What are the ethical challenges related to whole brain emulation?, What are the ethical challenges related to whole brain emulation?, What are the leading moral theories in philosophy and which might be technically easiest to program into an AI?
Create: Eutopia - Used by 3: Might trying to build a hedonium maximizing AI be easier and more likely to work than trying for eudaimonia?, What beneficial things would an aligned AGI be able to do?, What would a good future with AGI look like?
Create: Friendly ai - Used by 4: Answer to Can we add friendliness to any artificial intelligence design?, Answer to What is Friendly AI?, Can we add friendliness to any artificial intelligence design?, What is Friendly AI?
Create: Goals - Used by 2: Answer to What can we expect the motivations of a superintelligent machine to be?, What can we expect the motivations of a superintelligent machine to be?
Create: Governance - Used by 3: Does it make sense to focus on scenarios where change is rapid and from a single actor or slower and depends on getting agreements between several relevant actors?, How does the field of AI Safety want to accomplish its goal of preventing existential risk? By establishing best practices, institutions & processes, awareness, regulation, certification, etc?, What AI policy organisations exist?
Create: Government - Used by 7: Are any major politicians concerned about this?, Are there AI alignment projects which governments could usefully put a very large amount of resources into?, Does it make sense to focus on scenarios where change is rapid and from a single actor or slower and depends on getting agreements between several relevant actors?, How does the field of AI Safety want to accomplish its goal of preventing existential risk? By establishing best practices, institutions & processes, awareness, regulation, certification, etc?, How likely is it that AGI is first developed by a large established org, versus a small startup-y org, versus an academic group, versus a government?, How likely is it that governments play a role at all? What role would be desirable, if any?, How tractable is it to try to get governments to play a good role (rather than a bad role), and/or to try to get governments to play a role at all (rather than no role)?
Create: Hedonium - Used by 3: Might an aligned superintelligence immediately kill everyone and then go on to create a hedonium shockwave?, Might trying to build a hedonium maximizing AI be easier and more likely to work than trying for eudaimonia?, What is hedonium?
Create: Implementation - Used by 1: Plex's Answer to Why can’t we just…
Create: Information security - Used by 2: How important is research closure and opsec for capabilities-synergistic ideas?, How might we reduce the diffusion of dangerous AI technology to insufficiently careful actors?
Create: Infrastructure - Used by 1: What does the global chip supply chain look like, and who has political power over the chip supply?
Create: Inside view - Used by 2: Helenator's Answer to How do I form my own views about AI safety?, How do I form my own views about AI safety?
Create: Institutions - Used by 3: How does the field of AI Safety want to accomplish its goal of preventing existential risk? By establishing best practices, institutions & processes, awareness, regulation, certification, etc?, How likely are AI orgs to respond appropriately to the risks of their creations?, How successfully have institutions managed risks from novel technology in the past?
Create: Intelligence amplification - Used by 1: Are there promising ways to make AI alignment researchers smarter?
Create: Investmenting - Used by 1: How should I change my financial investments in response to the possibility of transformative AI?
Create: Megaprojects - Used by 1: Are there AI alignment projects which governments could usefully put a very large amount of resources into?
Create: Mentorship - Used by 1: Where can I find mentorship and advice for becoming a researcher?
Create: Metaethics - Used by 1: How is metaethics relevant to AI alignment?
Create: Metaphilosophy - Used by 1: What is metaphilosophy and how does it relate to AI safety?
Create: Metaphors - Used by 2: Plex's Answer to Is there a danger in anthropomorphising AI’s and trying to understand them in human terms?, Is there a danger in anthropomorphising AI’s and trying to understand them in human terms?
Create: Multipolar - Used by 1: Does it make sense to focus on scenarios where change is rapid and from a single actor or slower and depends on getting agreements between several relevant actors?
Create: Objections - Used by 2: What are common objections to AI alignment and brief responses?, What are some of the most carefully thought out objections to AI alignment?
Create: Open problems - Used by 1: What are some open research questions?
Create: Openai - Used by 1: Linnea's Answer to What does Elon Musk think about AI safety?
Create: Outreach - Used by 1: What convinced people working on AI alignment that it was worth spending their time on this cause?
Create: Paradigm - Used by 2: If AGI comes from a new paradigm, how likely is it to arise late in the paradigm when it is already deployed at scale versus early when a few people are exploring the idea?, To what extent are there meaningfully different paths to AGI, versus just one path?
Create: Pattern-matching - Used by 2: Plex's Answer to How might non-agentic GPT-style AI cause an intelligence explosion or otherwise contribute to existential risk?, How might non-agentic GPT-style AI cause an intelligence explosion or otherwise contribute to existential risk?
Create: Person-affecting view - Used by 2: ElloMelon's Answer to If I only care about helping people alive today, does AI safety still matter?, If I only care about helping people alive today, does AI safety still matter?
Create: Personal action - Used by 1: How should I personally prepare for when transformative AI arrives?
Create: Philosophy - Used by 1: What are some problems in philosophy that are related to AI safety?
Create: Physics - Used by 1: What things could a superintelligent AI do and what things are physically impossible even for it?
Create: Preferences - Used by 2: Answer to Can we just tell an AI to do what we want?, Can we just tell an AI to do what we want?
Create: Productivity - Used by 3: Could I contribute by offering coaching to alignment researchers? If so, how would I go about this?, How can I be a more productive student/researcher?, How can I support alignment researchers to be more productive?
Create: Regulation - Used by 3: Are there any plausibly workable proposals for regulating or banning dangerous AI research?, How does the field of AI Safety want to accomplish its goal of preventing existential risk? By establishing best practices, institutions & processes, awareness, regulation, certification, etc?, Is there something useful we can ask governments to do for AI alignment?
Create: Research assistants - Used by 2: Are there promising ways to make AI alignment researchers smarter?, Could emulated minds do AI alignment research?
Create: Security mindset - Used by 3: Answer to Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?, Plex's Answer to Why can’t we just…, Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?
Create: Simulation hypothesis - Used by 1: Is the question of whether we're living in a simulation relevant to AI safety? If so, how?
Create: Singleton - Used by 2: Linnea's Answer to What does Elon Musk think about AI safety?, Does it make sense to focus on scenarios where change is rapid and from a single actor or slower and depends on getting agreements between several relevant actors?
Create: Solutions - Used by 2: Answer to What would an actually good solution to AI alignment look like?, What would an actually good solution to AI alignment look like?
Create: Stable win condition - Used by 2: Plex's Answer to If we solve alignment, are we sure of a good future?, If we solve alignment, are we sure of a good future?
Create: Study - Used by 1: What university should I study at if I want to best prepare for working on AI alignment?
Create: Success models - Used by 2: Plex's Answer to If we solve alignment, are we sure of a good future?, If we solve alignment, are we sure of a good future?
Create: Tech companies - Used by 1: How likely is it that AGI is first developed by a large established org, versus a small startup-y org, versus an academic group, versus a government?
Create: Technological unemployment - Used by 1: Will superintelligence make a large part of humanity unemployable?
Create: Transhumanism - Used by 1: Does the importance of AI risk depend on caring about transhumanist utopias?
Create: Values - Used by 3: Luca's Answer to What is a value handshake?, Linnea's Answer to Will an aligned superintelligence care about animals other than humans?, What assets need to be protected by/from the AI? Are "human values" sufficient for it?