root@dc2:/home/super# /opt/rbta/aldpro/client/bin/aldpro-client-installer -c avavt.local -u admin -p ****** -d dc2 -i -f
systemctl mask aldpro-client-service-discovery.service
Created symlink /etc/systemd/system/aldpro-client-service-discovery.service → /dev/null.
/usr/bin/astra-freeipa-client -d "avavt.local" -u "admin" -p "******" -y --par "--hostname=dc2.avavt.local --force-join"
Discovery was successful!
Client hostname: dc2.avavt.local
Realm: AVAVT.LOCAL
DNS Domain: avavt.local
IPA Server: dc1.avavt.local
BaseDN: dc=avavt,dc=local
Synchronizing time
Configuration of chrony was changed by installer.
Attempting to sync time with chronyc.
Process chronyc waitsync failed to sync time!
Unable to sync time with chrony server, assuming the time is in sync. Please check that 123 UDP port is opened, and any time server is on network.
In unattended mode without a One Time Password (OTP) or without --ca-cert-file
You must specify --force to retrieve the CA cert using HTTP
Cannot obtain CA certificate
HTTP certificate download requires --force
Installation failed. Rolling back changes.
Disabling client Kerberos and LDAP configurations
Restoring client configuration files
nscd daemon is not installed, skip configuration
nslcd daemon is not installed, skip configuration
Some installation state for ntp has not been restored, see /var/lib/ipa/sysrestore/sysrestore.state
Some installation state has not been restored.
This may cause re-installation to fail.
It should be safe to remove /var/lib/ipa-client/sysrestore.state but it may
mean your system hasn't been restored to its pre-installation state.
Client uninstall complete.
The ipa-client-install command failed. See /var/log/ipaclient-install.log for more information
Traceback (most recent call last):
File "main.py", line 41, in <module>
File "console_app/install.py", line 17, in add_to_domain
console_app.install.BashException: Команда /usr/bin/astra-freeipa-client -d "avavt.local" -u "admin" -p "******" -y --par "--hostname=dc2.avavt.local --force-join" завершилась с ошибкой.
[15584] Failed to execute script main
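For reference, the log itself says what to verify before rerunning the installer: that UDP port 123 is reachable and that a time server is available on the network. A minimal check with chrony (a sketch, assuming chronyd is the time service in use, as it is in the log above):

# Show the configured time sources and whether any of them is reachable and selected.
chronyc sources -v
chronyc tracking

# Wait for chrony to report synchronisation (up to 10 tries); this is the same
# check the installer performs with "chronyc waitsync".
chronyc waitsync 10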
Many thanks to Vladimir for spending a fair amount of time on me. We figured it out. The problem was that the virtual machines had identical IDs, because I had cloned a clean virtual machine in Proxmox. In case anyone else runs into this.
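The post does not say which identifier was duplicated; on a VM cloned in Proxmox the usual suspects are the systemd machine-id and the Salt minion identity. A minimal reset sketch under that assumption, to run on the clone before joining it to the domain:

# Regenerate the systemd machine-id on the cloned VM.
rm -f /etc/machine-id
systemd-machine-id-setup

# Keep the D-Bus machine id consistent with it (it is usually a copy or a symlink).
rm -f /var/lib/dbus/machine-id
cp /etc/machine-id /var/lib/dbus/machine-id

# If salt-minion is installed, reset its identity so the clone does not reuse
# the template's minion_id and keys.
systemctl stop salt-minion
rm -f /etc/salt/minion_id
rm -rf /etc/salt/pki/minion/*
systemctl start salt-minion

# Reboot so every service picks up the new machine-id.
reboot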
Good afternoon. I am deploying the controller strictly according to these instructions and everything works fine. But when I deploy a client following the same instructions (I want to set up a second controller), an error invariably occurs when joining the client to the domain.

I am getting the same error. How can I get past it?

I decided to install ALD Pro 2.1.0 on Astra 1.7.4:
aldpro-server-install .......
and it ends with an error:
[INFO ] Loading fresh modules for state activity
[INFO ] Executing command dig in directory '/root'
[INFO ] Executing command dig in directory '/root'
[CRITICAL] Rendering SLS 'base:aldpro.dc.states.ipa' failed: found character that cannot start any token
[INFO ] Executing command dig in directory '/root'
[INFO ] Executing command dig in directory '/root'
[INFO ] Executing command dig in directory '/root'
[INFO ] Executing command dig in directory '/root'
local:
Data failed to compile:
----------
Rendering SLS 'base:aldpro.dc.states.ipa' failed: found character that cannot start any token
Traceback (most recent call last):
File "/usr/sbin/aldpro-server-install", line 110, in <module>
run()
File "/usr/sbin/aldpro-server-install", line 93, in run
run_command_with_show_stdout("salt-call state.apply aldpro.dc.install pillar='{}' queue=True".format(pillar_string))
File "/usr/sbin/aldpro-server-install", line 41, in run_command_with_show_stdout
raise Exception('Произошла ошибка. Пожалуйста, попробуйте выполнить команду повторно. \n')
Exception: Произошла ошибка. Пожалуйста, попробуйте выполнить команду повторно.
What could the problem be?
Thanks to Vladimir! The issue is resolved. There were not enough resources. I increased the VM to 2 CPUs and 4 GB of RAM, and everything is fine now.
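For anyone hitting the same failure, a quick pre-flight check of the VM before rerunning aldpro-server-install (a sketch; the 2 CPU / 4 GB figures are simply what resolved it in this thread, check the ALD Pro documentation for the official sizing):

# Number of CPUs visible to the VM.
nproc

# Available RAM and swap.
free -h

# Free space on the root filesystem.
df -h /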
"loop_|-wait_for_salt_minion_on_spb-sdc-999.vgrus.local_|-saltutil.runner_|-until_no_eval":
"duration": 6465.605,
"name": "saltutil.runner",
"jid": null,
"comment": "Call provided the expected results in 1 attempts",
"result": true
"loop_|-wait_for_salt_minion_on_spb-sdc-900.vgrus.local_|-saltutil.runner_|-until_no_eval":
"duration": 6388.65,
"name": "saltutil.runner",
"jid": null,
"comment": "Call provided the expected results in 1 attempts",
"result": true
"salt_|-target_machine_sync_all_|-saltutil.sync_all_|-function":
"duration": 1271.808,
"name": "saltutil.sync_all",
"jid": "20231024155741312255",
"comment": "Function ran successfully. Function saltutil.sync_all ran on spb-sdc-900.vgrus.local.",
"result": true
"salt_|-mask_discovery_master_|-mask_discovery_master_|-state":
"duration": 1679.917,
"name": "mask_discovery_master",
"jid": "20231024155742681898",
"comment": "States ran successfully. Updating spb-sdc-999.vgrus.local.",
"result": true
"salt_|-mask_discovery_target_|-mask_discovery_target_|-state":
"duration": 1087.236,
"name": "mask_discovery_target",
"jid": "20231024155744244628",
"comment": "States ran successfully. Updating spb-sdc-900.vgrus.local.",
"result": true
"salt_|-run_ssl_orchestrate_|-state.orchestrate_|-runner":
"duration": 6881.018,
"name": "state.orchestrate",
"jid": "20231024155745704882",
"comment": "Runner function 'state.orchestrate' executed.",
"result": true
"salt_|-install_mp_|-install_mp_|-state":
"duration": 266275.44,
"name": "install_mp",
"jid": "20231024155752231759",
"comment": "States ran successfully. Updating spb-sdc-900.vgrus.local.",
"result": true
"salt_|-stop_salt_master_target_|-stop_salt_master_target_|-state":
"duration": 7349.651,
"name": "stop_salt_master_target",
"jid": "20231024160218549729",
"comment": "States ran successfully. Updating spb-sdc-900.vgrus.local.",
"result": true
"salt_|-apply_deploy_state_|-apply_deploy_state_|-state":
"duration": 866997.13,
"name": "apply_deploy_state",
"jid": null,
"comment": "Run failed on minions: spb-sdc-900.vgrus.local",
"result": false
"salt_|-unmask_discovery_target_|-unmask_discovery_target_|-state":
"duration": 15286.564,
"name": "unmask_discovery_target",
"jid": null,
"comment": "Run failed on minions: spb-sdc-900.vgrus.local",
"result": false
"salt_|-start_salt_master_target_|-start_salt_master_target_|-state":
"duration": 4531.393,
"name": "start_salt_master_target",
"jid": "20231024161708421909",
"comment": "States ran successfully. Updating spb-sdc-900.vgrus.local.",
"result": true
"salt_|-unmask_discovery_master_|-unmask_discovery_master_|-state":
"duration": 2057.758,
"name": "unmask_discovery_master",
"jid": "20231024161712681406",
"comment": "States ran successfully. Updating spb-sdc-999.vgrus.local.",
"result": true
root@spb-sdc-999:~# systemctl start celery
root@spb-sdc-999:~# systemctl status celery
● celery.service - Celery Service
Loaded: loaded (/lib/systemd/system/celery.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2023-10-25 17:34:36 MSK; 1s ago
Process: 36380 ExecStartPre=/bin/mkdir -p ${CELERYD_STATE_DIR} (code=exited, status=0/SUCCESS)
Process: 36381 ExecStartPre=/bin/chown -R ${CELERYD_USER}:${CELERYD_GROUP} ${CELERYD_STATE_DIR} (code=exited, status=0/SUCCESS)
Process: 36382 ExecStart=/bin/sh -c ${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --
Main PID: 36392 (python3)
Tasks: 1 (limit: 9347)
Memory: 47.6M
CPU: 1.473s
CGroup: /system.slice/celery.service
└─36392 /usr/bin/python3 -m celery -A project worker --pidfile=/var/run/celery/worker1.pid --logfile=/var/log/aldpro/celery/worker1.log --loglevel=INFO --purge -n worker1@spb
окт 25 17:34:34 spb-sdc-999.vgrus.local systemd[1]: Starting Celery Service...
окт 25 17:34:36 spb-sdc-999.vgrus.local sh[36382]: celery multi v5.1.2 (sun-harmonics)
окт 25 17:34:36 spb-sdc-999.vgrus.local sh[36382]: > Starting nodes...
окт 25 17:34:36 spb-sdc-999.vgrus.local sh[36382]: > worker1@spb-sdc-999.vgrus.local: OK
окт 25 17:34:36 spb-sdc-999.vgrus.local systemd[1]: Started Celery Service.
root@spb-sdc-999:~# systemctl status celery
● celery.service - Celery Service
Loaded: loaded (/lib/systemd/system/celery.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Wed 2023-10-25 17:34:43 MSK; 1min 6s ago
Process: 36380 ExecStartPre=/bin/mkdir -p ${CELERYD_STATE_DIR} (code=exited, status=0/SUCCESS)
Process: 36381 ExecStartPre=/bin/chown -R ${CELERYD_USER}:${CELERYD_GROUP} ${CELERYD_STATE_DIR} (code=exited, status=0/SUCCESS)
Process: 36382 ExecStart=/bin/sh -c ${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --
Process: 36454 ExecStop=/bin/sh -c ${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE} (code=exited, status=0/SUCCESS)
Main PID: 36392 (code=exited, status=0/SUCCESS)
CPU: 3.285s
окт 25 17:34:34 spb-sdc-999.vgrus.local systemd[1]: Starting Celery Service...
окт 25 17:34:36 spb-sdc-999.vgrus.local sh[36382]: celery multi v5.1.2 (sun-harmonics)
окт 25 17:34:36 spb-sdc-999.vgrus.local sh[36382]: > Starting nodes...
окт 25 17:34:36 spb-sdc-999.vgrus.local sh[36382]: > worker1@spb-sdc-999.vgrus.local: OK
окт 25 17:34:36 spb-sdc-999.vgrus.local systemd[1]: Started Celery Service.
окт 25 17:34:43 spb-sdc-999.vgrus.local sh[36454]: celery multi v5.1.2 (sun-harmonics)
окт 25 17:34:43 spb-sdc-999.vgrus.local sh[36454]: > worker1@spb-sdc-999.vgrus.local: DOWN
окт 25 17:34:43 spb-sdc-999.vgrus.local systemd[1]: celery.service: Succeeded.
окт 25 17:34:43 spb-sdc-999.vgrus.local systemd[1]: celery.service: Consumed 3.285s CPU time.
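The status output above shows the worker starting and then exiting cleanly a few seconds later, so the reason has to be read from the worker's own log. A sketch, using the unit name and the log path shown in the ExecStart line above:

# Restart the unit, then read the journal and the worker's own log.
systemctl restart celery
journalctl -u celery -n 50
tail -n 100 /var/log/aldpro/celery/worker1.log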
-canclient[56817]: pika.exceptions.ProbableAccessDeniedError: ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost 'adcan' refused for user 'adcan': vhost 'adcan' is down"
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: [ERROR ] Connection closed while tuning the connection indicating a probable permission error when accessing a virtual host
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: [ERROR ] AMQPConnector - reporting failure: AMQPConnectorAMQPHandshakeError: ProbableAccessDeniedError: Client was disconnected at a connection stage indicating a probable denial of access to the specified virtual host: ('ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost \'adcan\' refused for user \'adcan\': vhost \'adcan\' is down"',)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: [ERROR ] Connection closed while tuning the connection indicating a probable permission error when accessing a virtual host
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: [ERROR ] AMQPConnector - reporting failure: AMQPConnectorAMQPHandshakeError: ProbableAccessDeniedError: Client was disconnected at a connection stage indicating a probable denial of access to the specified virtual host: ('ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost \'adcan\' refused for user \'adcan\': vhost \'adcan\' is down"',)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: [ERROR ] AMQP connection workflow failed: AMQPConnectionWorkflowFailed: 2 exceptions in all; last exception - AMQPConnectorAMQPHandshakeError: ProbableAccessDeniedError: Client was disconnected at a connection stage indicating a probable denial of access to the specified virtual host: ('ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost \'adcan\' refused for user \'adcan\': vhost \'adcan\' is down"',); first exception - AMQPConnectorAMQPHandshakeError: ProbableAccessDeniedError: Client was disconnected at a connection stage indicating a probable denial of access to the specified virtual host: ('ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost \'adcan\' refused for user \'adcan\': vhost \'adcan\' is down"',).
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: [ERROR ] AMQPConnectionWorkflow - reporting failure: AMQPConnectionWorkflowFailed: 2 exceptions in all; last exception - AMQPConnectorAMQPHandshakeError: ProbableAccessDeniedError: Client was disconnected at a connection stage indicating a probable denial of access to the specified virtual host: ('ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost \'adcan\' refused for user \'adcan\': vhost \'adcan\' is down"',); first exception - AMQPConnectorAMQPHandshakeError: ProbableAccessDeniedError: Client was disconnected at a connection stage indicating a probable denial of access to the specified virtual host: ('ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost \'adcan\' refused for user \'adcan\': vhost \'adcan\' is down"',)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: [ERROR ] Connection workflow failed: AMQPConnectionWorkflowFailed: 2 exceptions in all; last exception - AMQPConnectorAMQPHandshakeError: ProbableAccessDeniedError: Client was disconnected at a connection stage indicating a probable denial of access to the specified virtual host: ('ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost \'adcan\' refused for user \'adcan\': vhost \'adcan\' is down"',); first exception - AMQPConnectorAMQPHandshakeError: ProbableAccessDeniedError: Client was disconnected at a connection stage indicating a probable denial of access to the specified virtual host: ('ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost \'adcan\' refused for user \'adcan\': vhost \'adcan\' is down"',)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: [ERROR ] Error in _create_connection().
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: Traceback (most recent call last):
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/lib/python3/dist-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: raise self._reap_last_connection_workflow_error(error)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: pika.exceptions.ProbableAccessDeniedError: ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost 'adcan' refused for user 'adcan': vhost 'adcan' is down"
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: [ERROR ] An un-handled exception was caught by Salt's global exception handler:
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: ProbableAccessDeniedError: ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost 'adcan' refused for user 'adcan': vhost 'adcan' is down"
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: Traceback (most recent call last):
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/sbin/aldpro-canclient", line 124, in <module>
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: main()
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/sbin/aldpro-canclient", line 25, in main
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: amqp_enable_heartbeat=True,
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/lib/python3/dist-packages/ad_salt_can/models/client.py", line 32, in __init__
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: amqp_enable_heartbeat=amqp_enable_heartbeat
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/lib/python3/dist-packages/ad_salt_can/models/__init__.py", line 59, in __init__
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: self.amqp = DefaultAMQPTransport(enable_heartbeat=amqp_enable_heartbeat)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/lib/python3/dist-packages/ad_salt_can/transport/rabbitmq.py", line 70, in __init__
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: self.connector = self.connector(url, enable_heartbeat)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/lib/python3/dist-packages/ad_salt_can/transport/rabbitmq.py", line 23, in __init__
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: self.open_connection()
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/lib/python3/dist-packages/ad_salt_can/transport/rabbitmq.py", line 50, in open_connection
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: self._connection = pika.BlockingConnection(self.parameters)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/lib/python3/dist-packages/pika/adapters/blocking_connection.py", line 360, in __init__
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: self._impl = self._create_connection(parameters, _impl_class)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: File "/usr/lib/python3/dist-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: raise self._reap_last_connection_workflow_error(error)
Nov 10 19:19:48 dc0 aldpro-canclient[56817]: pika.exceptions.ProbableAccessDeniedError: ConnectionClosedByBroker: (541) "INTERNAL_ERROR - access to vhost 'adcan' refused for user 'adcan': vhost 'adcan' is down"
Nov 10 19:19:48 dc0 systemd[1]: aldpro-canclient.service: Main process exited, code=exited, status=1/FAILURE
Nov 10 19:19:48 dc0 systemd[1]: aldpro-canclient.service: Failed with result 'exit-code'.
Nov 10 19:19:48 dc0 systemd[1]: aldpro-canclient.service: Consumed 1.863s CPU time.
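The repeated message says that the vhost 'adcan' itself is down on the broker rather than the credentials being wrong. A diagnostic sketch (the vhost, user and service names are taken from the log above; restarting the broker is a blunt fallback):

# Check the broker and the vhosts it knows about.
rabbitmqctl status
rabbitmqctl list_vhosts
rabbitmqctl list_permissions -p adcan

# RabbitMQ 3.7+ can restart a single failed vhost; see
# "rabbitmqctl help restart_vhost" for the exact syntax on your version.

# Blunt fallback: restart the broker, then the client service, and re-check.
systemctl restart rabbitmq-server
systemctl restart aldpro-canclient
journalctl -u aldpro-canclient -n 50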
asdasd, good afternoon! You can install a domain controller and activate a single policy for the users. You can see how to fill it in by clicking the question mark…