Error: single-node step1 operation fails
jun314 opened this issue · 2 comments
jun314 commented
Hello,
My Linux system is CentOS Linux release 7.8.2003 (Core).
My IP address is shown below:
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b1:98:e8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.233.133/24 brd 192.168.233.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1a93:f472:c35a:178c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
So I modified deploy.yml:
[root@ecs ECS-CommunityEdition]# cat deploy.yml
# deploy.yml reference implementation v2.8.0

# [Optional]
# By changing the license_accepted boolean value to "true" you are
# declaring your agreement to the terms of the license agreement
# contained in the license.txt file included with this software
# distribution.
licensing:
  license_accepted: true

#autonames:
#  custom:
#    - ecs01
#    - ecs02
#    - ecs03
#    - ecs04
#    - ecs05
#    - ecs06

# [Required]
# Deployment facts reference
facts:
  # [Required]
  # Node IP or resolvable hostname from which installations will be launched
  # The only supported configuration is to install from the same node as the
  # bootstrap.sh script is run.
  # NOTE: if the install node is to be migrated into an island environment,
  #       the hostname or IP address listed here should be the one in the
  #       island environment.
  install_node: 192.168.233.133

  # [Required]
  # IPs of machines that will be whitelisted in the firewall and allowed
  # to access management ports of all nodes. If this is set to the
  # wildcard (0.0.0.0/0) then anyone can access management ports.
  management_clients:
    - 0.0.0.0/0

  # [Required]
  # These credentials must be the same across all nodes. Ansible uses these credentials to
  # gain initial access to each node in the deployment and set up ssh public key authentication.
  # If these are not correct, the deployment will fail.
  ssh_defaults:
    # [Required]
    # Username to use when logging in to nodes
    ssh_username: admin
    # [Required]
    # Password to use with SSH login
    # *** Set to same value as ssh_username to enable SSH public key authentication ***
    ssh_password: ChangeMe
    # [Required when enabling SSH public key authentication]
    # Password to give to sudo when gaining root access.
    ansible_become_pass: ChangeMe
    # [Required]
    # Select the type of crypto to use when dealing with ssh public key
    # authentication. Valid values here are:
    #   - "rsa" (Default)
    #   - "ed25519"
    ssh_crypto: rsa

  # [Required]
  # Environment configuration for this deployment.
  node_defaults:
    dns_domain: local
    dns_servers:
      - 192.168.233.1
    ntp_servers:
      - 192.168.233.1
    #
    # [Optional]
    # VFS path to source of randomness
    # Defaults to /dev/urandom for speed considerations. If you prefer /dev/random, put that here.
    # If you have a /dev/srandom implementation or special entropy hardware, you may use that too
    # so long as it implements a /dev/random type device.
    entropy_source: /dev/urandom
    #
    # [Optional]
    # Picklist for node names.
    # Available options:
    #   - "moons" (ECS CE default)
    #   - "cities" (ECS SKU-flavored)
    #   - "custom" (uncomment and use the top-level autonames block to define these)
    # autonaming: custom
    #
    # [Optional]
    # If your ECS comes with differing default credentials, you can specify those here
    # ecs_root_user: root
    # ecs_root_pass: ChangeMe

  # [Optional]
  # Storage pool defaults. Configure to your liking.
  # All block devices that will be consumed by ECS on ALL nodes must be listed under the
  # ecs_block_devices option. This can be overridden by the storage pool configuration.
  # At least ONE (1) block device is REQUIRED for a successful install. More is better.
  storage_pool_defaults:
    is_cold_storage_enabled: false
    is_protected: false
    description: Default storage pool description
    ecs_block_devices:
      - /dev/sdb

  # [Required]
  # Storage pool layout. You MUST have at least ONE (1) storage pool for a successful install.
  storage_pools:
    - name: sp1
      members:
        - 192.168.233.133
      options:
        is_protected: false
        is_cold_storage_enabled: false
        description: My First SP
        ecs_block_devices:
          - /dev/sdb

  # [Optional]
  # VDC defaults. Configure to your liking.
  virtual_data_center_defaults:
    description: Default virtual data center description

  # [Required]
  # Virtual data center layout. You MUST have at least ONE (1) VDC for a successful install.
  # Multi-VDC deployments are not yet implemented
  virtual_data_centers:
    - name: vdc1
      members:
        - sp1
      options:
        description: My First VDC

  # [Optional]
  # Replication group defaults. Configure to your liking.
  replication_group_defaults:
    description: Default replication group description
    enable_rebalancing: true
    allow_all_namespaces: true
    is_full_rep: false

  # [Optional, required for namespaces]
  # Replication group layout. You MUST have at least ONE (1) RG to provision namespaces.
  replication_groups:
    - name: rg1
      members:
        - vdc1
      options:
        description: My First RG
        enable_rebalancing: true
        allow_all_namespaces: true
        is_full_rep: false

  # [Optional]
  # Management User defaults
  management_user_defaults:
    is_system_admin: false
    is_system_monitor: false

  # [Optional]
  # Management Users
  management_users:
    - username: admin1
      password: ChangeMe
      options:
        is_system_admin: true
    - username: monitor1
      password: ChangeMe
      options:
        is_system_monitor: true

  # [Optional]
  # Namespace defaults
  namespace_defaults:
    is_stale_allowed: false
    is_compliance_enabled: false

  # [Optional]
  # Namespace layout
  namespaces:
    - name: ns1
      replication_group: rg1
      administrators:
        - root
      options:
        is_stale_allowed: false
        is_compliance_enabled: false

  # [Optional]
  # Object User defaults
  object_user_defaults:
    # Comma-separated list of Swift authorization groups
    swift_groups_list:
      - users
    # Lifetime of S3 secret key in minutes
    s3_expiry_time: 2592000

  # [Optional]
  # Object Users
  object_users:
    - username: object_admin1
      namespace: ns1
      options:
        swift_password: ChangeMe
        swift_groups_list:
          - admin
          - users
        s3_secret_key: ChangeMeChangeMeChangeMeChangeMeChangeMe
        s3_expiry_time: 2592000
    - username: object_user1
      namespace: ns1
      options:
        swift_password: ChangeMe
        s3_secret_key: ChangeMeChangeMeChangeMeChangeMeChangeMe

  # [Optional]
  # Bucket defaults
  bucket_defaults:
    namespace: ns1
    replication_group: rg1
    head_type: s3
    filesystem_enabled: False
    stale_allowed: False
    encryption_enabled: False
    owner: object_admin1

  # [Optional]
  # Bucket layout (optional)
  buckets:
    - name: bucket1
      options:
        namespace: ns1
        replication_group: rg1
        owner: object_admin1
        head_type: s3
        filesystem_enabled: False
        stale_allowed: False
        encryption_enabled: False
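As an aside, the config's own hard requirement (at least one block device listed under ecs_block_devices) is easy to check up front. A minimal sketch, assuming the /dev/sdb entry from the file above is the intended device:

# Confirm the device exists and is a whole, unpartitioned disk;
# ECS consumes the entire block device listed in ecs_block_devices.
lsblk /dev/sdb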
And I am still having the issue:
[root@ecs ECS-CommunityEdition]# step1
PLAY [Common | Ping data nodes before doing anything else] ******************************************************************************************************************************************************************************************************************************************************************
TASK [ping] *****************************************************************************************************************************************************************************************************************************************************************************************************************
fatal: [192.168.233.133]: UNREACHABLE! => {"changed": false, "msg": "Authentication failure.", "unreachable": true}
PLAY RECAP ******************************************************************************************************************************************************************************************************************************************************************************************************************
192.168.233.133 : ok=0 changed=0 unreachable=1 failed=0
Playbook run took 0 days, 0 hours, 0 minutes, 2 seconds
Operation failed.
[root@ecs ECS-CommunityEdition]#
Any ideas?
Thank you.
padthaitofuhot commented
Changed post formatting.
padthaitofuhot commented
Review your deploy.yml file and ensure the ssh_username: and ssh_password: fields match your VM.
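A minimal way to check and repair this, assuming the admin / ChangeMe values shown in the deploy.yml above (substitute your real credentials):

# Reproduce the failure outside Ansible; a "Permission denied" here is the
# same problem the step1 ping task reports as "Authentication failure".
ssh admin@192.168.233.133

# If the login fails, create the user and set its password on the node so
# they match ssh_defaults (run as root on 192.168.233.133).
useradd admin
echo 'admin:ChangeMe' | chpasswd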