
Windows 10 guest customization does not complete


There are no error messages. I get the typical "Started customization of VM..." event, but the guest never reboots or goes through a sysprep operation, nor does vSphere ever confirm that customization was successful. I looked in guestcust.log and could find nothing to suggest an error or that any part of the customization failed. Hoping I'm not alone here; this is a plain-vanilla install of Windows 10 with nothing fancy added or configured.
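
For reference, guest customization progress is also recorded as vCenter events, so PowerCLI can show whether a customization started/succeeded/failed event was ever raised for the VM. A minimal sketch, assuming an existing Connect-VIServer session; the VM name is a placeholder:

# Pull recent events for the VM and keep only the guest-customization ones
# (CustomizationStartedEvent, CustomizationSucceeded, CustomizationFailed, ...)
$vm = Get-VM -Name 'win10-vm01'   # placeholder name
Get-VIEvent -Entity $vm -MaxSamples 500 |
    Where-Object { $_.GetType().Name -like 'Customization*' } |
    Sort-Object CreatedTime |
    Select-Object CreatedTime, FullFormattedMessage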

 

vSphere Client version 5.5.0 build 3237766

vCenter Server version 5.5.0 build 3142196

VMware ESXi, 5.5.0 build 3568722

VMware tools are current and running

guestcust.log attached.


NFS 4.1 Disconnects - NFS 4.1 Advanced System Settings


We are experiencing periodic disconnects from an NFS v4.1 datastore on our vSphere 6 host. It occurs during periods of high I/O, with an associated VMware event indicating the hypervisor has lost connection to the NFS server. After a minute or two the NFS datastore re-mounts.

 

For NFS v3 I see people recommending modifying NFS.MaxQueueDepth (https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2016122), but this only seems to apply to NFS v3.

 

For NFS v4.1, the settings in Host > Configuration > Advanced System Settings start with "NFS41." instead of "NFS.", so I assume any changes made to NFS.MaxQueueDepth would not apply to our NFS v4.1 datastores.

 

Does anyone have experience modifying the NFS41 advanced system settings? I'd like to find a setting comparable to NFS v3's MaxQueueDepth that works with NFS v4.1. Since NFS v4.1 is relatively new, I'm not finding any documentation on these NFS41 advanced settings.

 

I have a suspicion that NFS41.MaxRead and NFS41.MaxWrite may be similar to NFS.MaxQueueDepth. Could anyone confirm?
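
For anyone else digging into this, the NFS41.* settings (with their current values and descriptions) can at least be enumerated per host from PowerCLI. A minimal sketch, assuming an existing Connect-VIServer session and a placeholder host name:

$esx = Get-VMHost -Name 'esx01.example.com'   # placeholder host name

# List every NFS 4.1 advanced setting next to the NFS v3 queue-depth setting
Get-AdvancedSetting -Entity $esx -Name 'NFS41.*' | Select-Object Name, Value, Description
Get-AdvancedSetting -Entity $esx -Name 'NFS.MaxQueueDepth'

# Changing a value would look like this (the setting name below is hypothetical;
# verify it exists on your build before relying on it):
# Get-AdvancedSetting -Entity $esx -Name 'NFS41.MaxRead' | Set-AdvancedSetting -Value 131072 -Confirm:$false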

 

Thank you.

Enabling proxy arp to support LXC containers


I have a cluster of CentOS 7.2 servers running as VMs under ESXi 6.0. These all reside on the same subnet, and we have no problem communicating between the different servers. In addition, each of these servers runs a number of LXC containers, also based on CentOS 7.2. The hosts can communicate with their containers without issue, and the containers on a given server can communicate with each other. However, containers hosted on two different servers cannot communicate with each other or with other servers.

 

When we install this same cluster software on real hardware, we do not have any communication issues between the containers. We've also installed our cluster software on KVM-based VMs instead of ESXi, and everything works fine in that environment. On the other hand, when we tackled an AWS-based installation, we ran into the same problem as with ESXi: containers running on different hosts (AWS instances) could not communicate. We solved the problem in AWS using proxy ARP. Specifically, we set the following CentOS options on each of our servers:

 

echo 1 > /proc/sys/net/ipv4/conf/br0/forwarding

echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp_pvlan

echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp

echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/br0/send_redirects

 

With these configured, the containers running under the various instances can communicate with each other regardless of which instance hosts them. Unfortunately, we tried the same set of options on our ESXi-based cluster without success. The only solution we've found is to enable promiscuous mode on the vSwitch defined in our vSphere environment. However, this is not ideal from a security perspective, since it means all traffic, regardless of where it originates, is allowed through to the hosts and containers.
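
One way to narrow that workaround, in case it helps: promiscuous mode can be enabled on just a dedicated port group carrying the container traffic rather than on the whole vSwitch, since port-group security policy overrides the switch policy. A minimal PowerCLI sketch with placeholder names (bridged container traffic typically also needs forged transmits and MAC changes allowed):

$esx = Get-VMHost -Name 'esx01.example.com'              # placeholder host
Get-VirtualPortGroup -VMHost $esx -Name 'lxc-bridge-pg' |   # placeholder port group
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true -MacChanges $true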

 

So the question is: can this issue be solved with a proxy ARP approach like the one we used in AWS, or is there another approach entirely? As I mentioned, our KVM-based cluster works without proxy ARP or promiscuous mode, so perhaps there is something in ESXi that behaves the way KVM does. Any help in this matter would be very much appreciated.

 

Peter

Storing VMs on SD Cards


Hello,

I am considering buying VMware Workstation for my laptop. However, I don't have enough space on the hard drive to store the VMs, so I was thinking about getting a big SD card or USB drive.

 

The virtual machines would be strictly for training / testing purposes. We would never put a production-level VM on an SD card.

 

What is the feasibility of doing this? Does anyone have any experience with it?

 

Is it technically considered supported by VMware? If not, does it work anyway?

 

I am not looking for any promises, just trying to get a feel for it. If it doesn't work at all, I will have wasted the money on Workstation and on a large SD card or drive.

 

I'm primarily looking at Workstation, but might also look at Player Pro.

 

Thanks,

Ben

Meaning of UsedSpaceGB


I'm running this basic command but not getting what I need:

 

 

 

PowerCLI C:\> get-vm vmlax002 | select name, provisionedspacegb, usedspacegb

 

What I get is 100 GB for both ProvisionedSpaceGB and UsedSpaceGB.

 

However, inside the OS, Windows sees that although it is a 100 GB virtual disk, there is still 50 GB of free space.

 

The virtual machine is thin provisioned.

 

How can I get just the used space as seen from the OS perspective?
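
One way to get the in-guest numbers is from the guest disk data that VMware Tools reports back to vCenter; a minimal sketch, assuming Tools is running in the guest:

# UsedSpaceGB is datastore-side usage; the in-guest view lives in Guest.Disks
$vm = Get-VM -Name 'vmlax002'
$vm.Guest.Disks | Select-Object Path,
    @{N='CapacityGB'; E={[math]::Round($_.Capacity  / 1GB, 1)}},
    @{N='FreeGB';     E={[math]::Round($_.FreeSpace / 1GB, 1)}},
    @{N='UsedGB';     E={[math]::Round(($_.Capacity - $_.FreeSpace) / 1GB, 1)}}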

 

Thanks!

Enrollment server & certificates?


Hi all,

 

I am just going through the ICM Horizon 7.0 course, but it is On Demand and I don't have access to anyone to ask questions.

 

My questions are as follows:

 

1) Is it 100% required to install an enrollment server in a new environment? (I ask because, for example, the environment may not need access from outside the LAN, and the users may not mind entering their AD credentials each time.)

2) Is it 100% required to use certificates in the View 7.0 environment? (In the course it doesn't seem essential, yet some slides say it is required.)

3) What is the simplest (minimum required) Horizon environment that is supported (for example, just a Connection Server and desktops)?

4) If the installation in point 3 above is OK (just a Connection Server and desktops), what would the client be missing out on, e.g. having to click "Proceed, I understand the risk" every time they open Internet Explorer to reach the Connection Server, having to manually enter their AD credentials each time they log in to desktops, etc.?

 

I ask these questions because I may have a deployment where the client wants the absolute minimum installed, and the course hasn't made it clear what that minimum could be.

I want to be able to advise them on the restrictions they may be placing on their environment, the risks, and so on.

 

Thanks in advance

Regards

Mark

Agent Unreachable Status for VDI Clients


Hey guys,

 

We have an automated pool of 150 desktops that get refreshed on logoff or after being disconnected for more than 60 minutes. I've noticed that we gradually accumulate more and more "Agent Unreachable" statuses on our VDI desktops over a period of 4-6 hours, until I go in, remove the VMs from disk, and let provisioning re-provision the VMs to reset them. A desktop reset does not fix the issue. I have made sure the VMs are receiving IP addresses from DHCP, and they are. We are not running out of addresses: these 150 desktops have 250 IP addresses in the DHCP scope with 15-minute lease times.

 

We are running VMware View 5.2. If any more information is needed, just ask.

Remove Individual Linked Clone View PowerCLI


Is there a way to force-delete a linked-clone VM from a pool? I currently have a couple of linked clones that I can't remove through the console, and I'm wondering if I can force-remove a clone through PowerCLI. I can't find a cmdlet to remove a single clone, only the entire pool.

 

Ideally I would like to write a script that finds null VMs, or VMs throwing the message "Could not find object of type...", and deletes them.
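
The detection half of that script can be sketched by cross-referencing View's inventory against vCenter's; a rough sketch, assuming the View PowerCLI snap-in on a Connection Server plus a separate vCenter session (actually deleting the orphans still seems to require the documented ADAM/database cleanup, so treat this as read-only reconnaissance):

# Anything View lists that vCenter cannot find is a candidate orphan
$viewNames    = Get-DesktopVM | Select-Object -ExpandProperty Name
$vCenterNames = Get-VM | Select-Object -ExpandProperty Name
$viewNames | Where-Object { $vCenterNames -notcontains $_ }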

 

Any help would be greatly appreciated,

Thanks


ESXi on IBM x3650 M1 (7979)


Hello, and first of all thank you for your time. I have the following question; maybe someone can help me.

I have IBM x3650 (7979) servers with the 8k RAID controller, and I would like to know how I can manage the HDDs and RAID arrays from the ESXi 6.0 I installed. It shows me the state of each disk, but I have no way to manage the RAID or get an alert from it.

Suppose I want to mark a disk as defunct in order to replace it because it failed: I have no way to do that through the IBM software.

 

 

This is everything it shows me about the HDDs, nothing more. Is there no way to manage the RAID or the state of the disks in ESXi?

 

(screenshot: esxi1.png)

I tried using PRTG, but it also has no way to report the disk status and send me an alert. What worries me most, though, is that I cannot manage the RAID arrays built on that server.

 

I hope I have been clear.

 

Regards,

VMware Tools fails to install on Czech Windows 2000 & XP


Recently I've been reviving two installations, Windows 2000 SP4 and Windows XP SP3 (both Czech versions), and found that I am unable to install the latest VMware Tools.

 

Both systems ran the Tools perfectly fine on an older version of VMware Workstation (probably 7 or 8), but updating to the latest Tools on VMware Workstation 12.1 fails. When the installation is almost done, the installer starts to "roll back actions" and then reports that the installation ended prematurely. It's the same on 2000 and XP, except that on Win2k there is an additional error about the ThinPrint printer driver (though if I click Continue on it, the installation carries on "fine").

 

I have tried reinstalling Win2k multiple times, but I always end up with the same problem (I tried installing both before and after applying all updates), except with the English localization, where the Tools installed properly. I don't really understand why this fails on the Czech locale but works fine on the English one, since it had always worked on Czech too for many years.

 

Interestingly, the last versions I found installable are 8.6.16 build 3054402 and 8.4.6 build 385536 (both quite old).

 

I've come across this KB article that appears to describe my problem, but according to it, this should have been fixed a long time ago:

VMware KB:    VMware Tools installation in non-US-English 32-bit Windows rolls back and fails with MSI error status 160…

 

This is the log I found in the temp folder on Windows 2000 after trying to upgrade from the version named above:
http://pastebin.com/riWPa54G

 

Also, it is extremely disappointing that a product of such quality lacks 3D acceleration on Windows 95 and Windows 98 systems.

 

The bug is present in the latest release, 12.1.1.

 

Any help would be greatly appreciated. :)

Thanks.

When will VMware Fusion Pro 9 go on sale?


When will VMware Fusion Pro 9 go on sale?

32 vmdk files in Windows Explorer


Since a new installation of VMware Workstation 12, we have 32 numbered .vmdk files on our disk.

And the Windows 7 guest in the VM is running very, very slowly.

 

How can one consolidate all these 32 files (most of them about 5 GB) into a single .vmdk file?
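
One route is the vmware-vdiskmanager utility that ships with Workstation, which can clone a split disk into a single growable file; a sketch with placeholder paths, to be run from PowerShell with the VM powered off and no snapshots:

# -r <source> clones/converts the disk; -t 0 = single growable virtual disk
$vdm = 'C:\Program Files (x86)\VMware\VMware Workstation\vmware-vdiskmanager.exe'
& $vdm -r 'D:\VMs\Win7\Win7.vmdk' -t 0 'D:\VMs\Win7\Win7-single.vmdk'
# Then point the VM's disk at the new file and delete the old extents.

Note that the split layout by itself is rarely what makes a guest slow, so it is worth checking host disk and memory pressure as well.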

 

Thank you.

How to clear the yellow exclamation mark on an ESX host after fixing a problem?


I think this is a simple question: after we fixed a NIC connectivity issue, the yellow exclamation mark on the ESX host doesn't disappear. What should I do to clear it? Thanks.

Why does VMRC sometimes open as a very small window?


Sometimes when I open a new console window, VMRC starts as a very small window, much smaller than the resolution of the VM. It also doesn't have any scroll bars. Why does this happen, and is there a way to prevent this behavior?

Need script to add-to-inventory every VM on a datastore


Hi all,

 

I need a script to add-to-inventory every VM on a datastore.

 

Any suggestions?
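
In case a concrete starting point helps: the usual pattern is to browse the datastore through PowerCLI's datastore provider for .vmx files and register each one with New-VM. A minimal sketch, assuming an existing Connect-VIServer session; the datastore and host names are placeholders, and anything already registered should be filtered out first:

$ds  = Get-Datastore -Name 'datastore1'          # placeholder
$esx = Get-VMHost -Name 'esx01.example.com'      # placeholder

New-PSDrive -Location $ds -Name ds -PSProvider VimDatastore -Root '\' | Out-Null
$vmxFiles = Get-ChildItem -Path ds:\ -Recurse | Where-Object { $_.Name -like '*.vmx' }

foreach ($vmx in $vmxFiles) {
    # DatastoreFullPath looks like "[datastore1] myvm/myvm.vmx"
    New-VM -VMFilePath $vmx.DatastoreFullPath -VMHost $esx -RunAsync
}
Remove-PSDrive -Name ds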

 

 

Thanks!

 

-Chris


P2V conversion


If a current standalone server has problems and I convert it to a VM using P2V, will the problems follow?

Thanks

remove "inaccessable" datastore from VCenter appliance inventory.


I am trying to find out how to remove two datastores marked "inaccessible" from the vCenter appliance, version 6.0 Update 2.

 

We have two datacenters, each with its own vCenter appliance. We had a host in one vCenter that had access to two datastores attached over Fibre Channel. That host was imported into the second vCenter using "add host", so it wasn't properly removed from the first vCenter.

 

The result is that there are two datastores in the first vCenter that have no hosts associated with them. They're showing as "inaccessible" and are grayed out. Most of the functions, such as "delete" and "unmount", are also grayed out.

 

Is there an easy, safe way to remove them? I did some searching, and all I found were references to NFS datastores and/or to modifying the database on a Windows-based vCenter. I found articles about removing a datastore from a host, but in this case there are no hosts attached to these datastores.

 

I'm looking for something like the "remove from inventory" option that exists for hosts.

 

Mike O.

ESXi firewall rule configuration for vSphere Integrated Containers (VIC) 1.0


vSphere Integrated Containers (VIC) 1.0 requires an ESXi firewall rule that is not registered in ESXi by default.

TCP port 2377 must be opened for the Serial Over LAN communication from ESXi to the VCH.

(Alternatively, you could just disable the ESXi firewall entirely with esxcli network firewall set --enabled false...)

Firewall Validation Error | VMware vSphere Integrated Containers Engine 0.8 Installation

 

If the firewall rule is not configured, messages like the following appear when deploying a Virtual Container Host (VCH) with vic-machine:

INFO[2017-01-12T02:51:32+09:00] Firewall status: ENABLED on "/dc01/host/cluster-vsan01/hv-i23.godc.lab"

WARN[2017-01-12T02:51:32+09:00] Firewall configuration on "/dc01/host/cluster-vsan01/hv-i23.godc.lab" may prevent connection on dst 2377/tcp outbound with allowed IPs: [192.168.51.161 192.168.51.239]

 

For more on deploying a VCH, see also:

Trying out vSphere Integrated Containers (VIC) 1.0

 

This time, I will add a rule to the ESXi firewall by following the KB below:

Creating custom firewall rules in VMware ESXi 5.x (2008226) | VMware KB

 

The firewall rule to configure:

  • Direction: outbound
  • Port: TCP 2377
  • Rule ID: vicoutgoing (any other name is fine)

 

How to add the firewall rule: describe the rule in an XML file under the /etc/vmware/firewall/ directory on the ESXi host.

Appending to /etc/vmware/firewall/service.xml would work, but this time I deliberately created a separate file, /etc/vmware/firewall/vicoutgoing.xml.

 

I am configuring this on ESXi 6.0 U2.

[root@hv-i23:~] vmware -vl

VMware ESXi 6.0.0 build-4192238

VMware ESXi 6.0.0 Update 2

 

No VIC firewall rule is configured yet.

[root@hv-i23:~] esxcli network firewall ruleset list | grep vic

[root@hv-i23:~]

[root@hv-i23:~] esxcli network firewall ruleset rule list | grep vic

[root@hv-i23:~]

 

Create an XML file describing the firewall rule, either with an editor such as vi or as shown below.

For the service id, I picked 300, which looked unused.

cat << EOF > /etc/vmware/firewall/vicoutgoing.xml

<ConfigRoot>

  <service id='0300'>

    <id>vicoutgoing</id>

    <rule id='0000'>

      <direction>outbound</direction>

      <protocol>tcp</protocol>

      <port type='dst'>2377</port>

    </rule>

    <enabled>true</enabled>

    <required>true</required>

  </service>

</ConfigRoot>

EOF

 

The XML file has been created.

[root@hv-i23:~] cat /etc/vmware/firewall/vicoutgoing.xml

<ConfigRoot>

  <service id='0300'>

    <id>vicoutgoing</id>

    <rule id='0000'>

      <direction>outbound</direction>

      <protocol>tcp</protocol>

      <port type='dst'>2377</port>

    </rule>

    <enabled>true</enabled>

    <required>true</required>

  </service>

</ConfigRoot>

 

Refreshing the ESXi firewall adds the rule.

[root@hv-i23:~] esxcli network firewall refresh

[root@hv-i23:~] esxcli network firewall ruleset list | grep vic

vicoutgoing                  true

[root@hv-i23:~] esxcli network firewall ruleset rule list | grep vic

vicoutgoing               Outbound   TCP       Dst              2377      2377

 

The vSphere Web Client also shows that the outbound rule has been added.

(screenshot: vic-fw-rule-01.png)

 

Persisting the ESXi firewall rule

 

With the method above, the rule disappears when ESXi reboots, so it would have to be re-registered after every reboot.

As a workaround, in my environment I added the relevant part (generating the XML file and refreshing the firewall) to the /etc/rc.local.d/local.sh file, which runs when ESXi boots.

[root@hv-i23:~] cat /etc/rc.local.d/local.sh


#!/bin/sh


# local configuration options


# Note: modify at your own risk!  If you do/use anything in this

# script that is not part of a stable API (relying on files to be in

# specific places, specific tools, specific output, etc) there is a

# possibility you will end up with a broken system after patching or

# upgrading.  Changes are not supported unless under direction of

# VMware support.

 

cat << EOF > /etc/vmware/firewall/vicoutgoing.xml

<ConfigRoot>

  <service id='0300'>

    <id>vicoutgoing</id>

    <rule id='0000'>

      <direction>outbound</direction>

      <protocol>tcp</protocol>

      <port type='dst'>2377</port>

    </rule>

    <enabled>true</enabled>

    <required>true</required>

  </service>

</ConfigRoot>

EOF


esxcli network firewall refresh


exit 0

 

That's all on the ESXi firewall rule configuration for VIC.

Trying out vSphere Integrated Containers (VIC) 1.0


This is the day-11 post of the 2016 Advent Calendar by Japanese vExperts.

vExperts Advent Calendar 2016 - Adventar

 

VMware vSphere Integrated Containers 1.0 has finally gone GA.

Documentation:

VMware vSphere Integrated Containers Documentation

Download:

VMware vSphere Integrated Containers Download

 

I promptly installed the vSphere Integrated Containers Engine (VIC Engine) in the configuration below.

The VIC Engine version included in VIC 1.0 is 0.8.

The VCH Endpoint VM and the container VMs run VMware Photon OS.

  • vCenter Server Appliance 6.0 U2
  • ESXi 6.0 U2
  • DRS enabled
  • vSphere Distributed Switch (vDS)
  • vSAN datastore
  • vic-machine run from a Windows 10 PC
  • docker commands run from Oracle Linux 7

(diagram: vE-Advent2016-VIC10GA.png)

vic-machine and the docker command can be run from Windows, Linux, or Mac.

Linux appears to be tested with Ubuntu, but I happened to have an Oracle Linux environment at hand, so I used that.

 

1. Deploying a Virtual Container Host (VCH)

 

I downloaded the VIC Engine (vic_0.8.0-7315-c8ac999.tar.gz) from the My VMware site and extracted it to C:\work.

The vic-machine utility included in it deploys the VCH. This time I deployed the VCH from a Windows 10 PC.

PS C:\work\vic> dir

 

Directory: C:\work\vic

 

Mode                LastWriteTime         Length Name

----                -------------         ------ ----

d-----       2016/12/11     13:46                ui

-a----       2016/12/04      4:30      127401984 appliance.iso

-a----       2016/12/04      4:30       65732608 bootstrap.iso

-a----       2016/12/04      4:29         209570 LICENSE

-a----       2016/12/04      4:29             57 README

-a----       2016/12/04      4:29       35095088 vic-machine-darwin

-a----       2016/12/04      4:29       35557968 vic-machine-linux

-a----       2016/12/04      4:29       35261952 vic-machine-windows.exe

-a----       2016/12/04      4:29       31672144 vic-ui-darwin

-a----       2016/12/04      4:29       31972920 vic-ui-linux

-a----       2016/12/04      4:29       31675392 vic-ui-windows.exe

 

 

Use the Windows build of vic-machine.

PS C:\work\vic> .\vic-machine-windows.exe

NAME:

   vic-machine-windows.exe - Create and manage Virtual Container Hosts


USAGE:

   vic-machine-windows.exe [global options] command [command options] [arguments...]


VERSION:

   v0.8.0-7315-c8ac999


COMMANDS:

     create   Deploy VCH

     delete   Delete VCH and associated resources

     ls       List VCHs

     inspect  Inspect VCH

     version  Show VIC version information

     debug    Debug VCH


GLOBAL OPTIONS:

   --help, -h     show help

   --version, -v  print the version

 

Now deploy the VCH. The vCenter address in this example is 192.168.1.82.

Deployment fails if the VCH cannot resolve the vCenter name, so I specified the IP address here.

The output shows the IP address assigned to the VCH.

The example below skips this step, but the ESXi firewall rule should be opened before running vic-machine:

ESXi firewall rule configuration for vSphere Integrated Containers (VIC) 1.0

PS C:\work\vic> .\vic-machine-windows.exe create --target 192.168.1.82 --user "administrator@vsphere.local" --password <password> --compute-resource cluster-vsan01 --bridge-network vic-bridge --public-network dvpg-vlan-0000 --image-store vsanDatastore --no-tlsverify --force

INFO[2016-12-11T17:02:29+09:00] ### Installing VCH ####
WARN[2016-12-11T17:02:29+09:00] Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
INFO[2016-12-11T17:02:29+09:00] Loaded server certificate virtual-container-host\server-cert.pem
WARN[2016-12-11T17:02:29+09:00] Configuring without TLS verify - certificate-based authentication disabled
INFO[2016-12-11T17:02:29+09:00] Validating supplied configuration
INFO[2016-12-11T17:02:29+09:00] vDS configuration OK on "vic-bridge"
INFO[2016-12-11T17:02:29+09:00] Firewall status: ENABLED on "/dc01/host/cluster-vsan01/hv-i21.godc.lab"
WARN[2016-12-11T17:02:29+09:00] Firewall configuration on "/dc01/host/cluster-vsan01/hv-i21.godc.lab" may prevent connection on dst 2377/tcp outbound with allowed IPs: [192.168.51.239 192.168.51.161]
INFO[2016-12-11T17:02:29+09:00] Firewall status: ENABLED on "/dc01/host/cluster-vsan01/hv-i22.godc.lab"
WARN[2016-12-11T17:02:29+09:00] Firewall configuration on "/dc01/host/cluster-vsan01/hv-i22.godc.lab" may prevent connection on dst 2377/tcp outbound with allowed IPs: [192.168.51.161 192.168.51.239]
INFO[2016-12-11T17:02:29+09:00] Firewall status: ENABLED on "/dc01/host/cluster-vsan01/hv-i23.godc.lab"
WARN[2016-12-11T17:02:29+09:00] Firewall configuration on "/dc01/host/cluster-vsan01/hv-i23.godc.lab" may prevent connection on dst 2377/tcp outbound with allowed IPs: [192.168.51.161 192.168.51.239]
WARN[2016-12-11T17:02:29+09:00] Unable to fully verify firewall configuration due to DHCP use on management network
WARN[2016-12-11T17:02:29+09:00] VCH management interface IP assigned by DHCP must be permitted by allowed IP settings
WARN[2016-12-11T17:02:29+09:00] Firewall allowed IP configuration may prevent required connection on hosts:
WARN[2016-12-11T17:02:29+09:00] "/dc01/host/cluster-vsan01/hv-i21.godc.lab"
WARN[2016-12-11T17:02:29+09:00] "/dc01/host/cluster-vsan01/hv-i22.godc.lab"
WARN[2016-12-11T17:02:29+09:00] "/dc01/host/cluster-vsan01/hv-i23.godc.lab"
INFO[2016-12-11T17:02:29+09:00] Firewall must permit dst 2377/tcp outbound to the VCH management interface
INFO[2016-12-11T17:02:30+09:00] License check OK on hosts:
INFO[2016-12-11T17:02:30+09:00] "/dc01/host/cluster-vsan01/hv-i21.godc.lab"
INFO[2016-12-11T17:02:30+09:00] "/dc01/host/cluster-vsan01/hv-i22.godc.lab"
INFO[2016-12-11T17:02:30+09:00] "/dc01/host/cluster-vsan01/hv-i23.godc.lab"
INFO[2016-12-11T17:02:30+09:00] DRS check OK on:
INFO[2016-12-11T17:02:30+09:00] "/dc01/host/cluster-vsan01/Resources"
INFO[2016-12-11T17:02:30+09:00]
INFO[2016-12-11T17:02:30+09:00] Creating virtual app "virtual-container-host"
INFO[2016-12-11T17:02:31+09:00] Creating appliance on target
INFO[2016-12-11T17:02:31+09:00] Network role "client" is sharing NIC with "public"
INFO[2016-12-11T17:02:31+09:00] Network role "management" is sharing NIC with "public"
INFO[2016-12-11T17:02:34+09:00] Uploading images for container
INFO[2016-12-11T17:02:34+09:00] "bootstrap.iso"
INFO[2016-12-11T17:02:34+09:00] "appliance.iso"
INFO[2016-12-11T17:02:46+09:00] Waiting for IP information
INFO[2016-12-11T17:03:02+09:00] Waiting for major appliance components to launch
INFO[2016-12-11T17:03:02+09:00] Checking VCH connectivity with vSphere target
INFO[2016-12-11T17:03:03+09:00] vSphere API Test: https://192.168.1.82 vSphere API target responds as expected
INFO[2016-12-11T17:03:16+09:00] Initialization of appliance successful
INFO[2016-12-11T17:03:16+09:00]
INFO[2016-12-11T17:03:16+09:00] VCH Admin Portal:
INFO[2016-12-11T17:03:16+09:00] https://192.168.1.5:2378
INFO[2016-12-11T17:03:16+09:00]
INFO[2016-12-11T17:03:16+09:00] Published ports can be reached at:
INFO[2016-12-11T17:03:16+09:00] 192.168.1.5
INFO[2016-12-11T17:03:16+09:00]
INFO[2016-12-11T17:03:16+09:00] Docker environment variables:
INFO[2016-12-11T17:03:16+09:00] DOCKER_HOST=192.168.1.5:2376
INFO[2016-12-11T17:03:16+09:00]
INFO[2016-12-11T17:03:16+09:00] Environment saved in virtual-container-host/virtual-container-host.env
INFO[2016-12-11T17:03:16+09:00]
INFO[2016-12-11T17:03:16+09:00] Connect to docker:
INFO[2016-12-11T17:03:16+09:00] docker -H 192.168.1.5:2376 --tls info
INFO[2016-12-11T17:03:16+09:00] Installer completed successfully
PS C:\work\vic>

 

For the Docker access that follows, the IP address of the deployed VCH is specified as the endpoint.

The VCH IP address can also be looked up afterwards.

 

Omitting the vCenter thumbprint results in an error...

PS C:\work\vic> .\vic-machine-windows.exe ls --target 192.168.1.82 --user "administrator@vsphere.local" --password <password> --compute-resource cluster-vsan01
INFO[2016-12-11T17:47:38+09:00] ### Listing VCHs ####
ERRO[2016-12-11T17:47:38+09:00] Failed to verify certificate for target=192.168.1.82 (thumbprint=A4:98:53:2F:68:11:01:06:08:48:AD:68:33:95:0D:6F:30:10:4D:D1)
ERRO[2016-12-11T17:47:38+09:00] List cannot continue - failed to create validator: x509: certificate signed by unknown authority
ERRO[2016-12-11T17:47:38+09:00] --------------------
ERRO[2016-12-11T17:47:38+09:00] vic-machine-windows.exe failed: list failed

 

Specify the thumbprint and run "vic-machine-windows ls". You can see that the VCH appliance has been deployed as a VM (vm-527).

PS C:\work\vic> .\vic-machine-windows.exe ls --target 192.168.1.82 --user "administrator@vsphere.local" --password <password> --compute-resource cluster-vsan01 --thumbprint A4:98:53:2F:68:11:01:06:08:48:AD:68:33:95:0D:6F:30:10:4D:D1
INFO[2016-12-11T17:48:00+09:00] ### Listing VCHs ####
INFO[2016-12-11T17:48:00+09:00] Validating target
INFO[2016-12-11T17:48:00+09:00] Validating compute resource

 

ID            PATH                                       NAME                          VERSION

vm-527        /dc01/host/cluster-vsan01/Resources        virtual-container-host        v0.8.0-7315-c8ac999

 

The endpoint address can also be confirmed:

PS C:\work\vic> .\vic-machine-windows.exe inspect --target 192.168.1.82 --user "administrator@vsphere.local" --password <password> --compute-resource cluster-vsan01 --thumbprint A4:98:53:2F:68:11:01:06:08:48:AD:68:33:95:0D:6F:30:10:4D:D1
INFO[2016-12-11T17:50:09+09:00] ### Inspecting VCH ####
INFO[2016-12-11T17:50:09+09:00]
INFO[2016-12-11T17:50:09+09:00] VCH ID: VirtualMachine:vm-527
INFO[2016-12-11T17:50:09+09:00]
INFO[2016-12-11T17:50:09+09:00] Installer version: v0.8.0-7315-c8ac999
INFO[2016-12-11T17:50:09+09:00] VCH version: v0.8.0-7315-c8ac999
WARN[2016-12-11T17:50:10+09:00] Unable to identify address acceptable to host certificate
INFO[2016-12-11T17:50:10+09:00]
INFO[2016-12-11T17:50:10+09:00] VCH Admin Portal:
INFO[2016-12-11T17:50:10+09:00] https://192.168.1.5:2378
INFO[2016-12-11T17:50:10+09:00]
INFO[2016-12-11T17:50:10+09:00] Published ports can be reached at:
INFO[2016-12-11T17:50:10+09:00] 192.168.1.5
INFO[2016-12-11T17:50:10+09:00]
INFO[2016-12-11T17:50:10+09:00] Docker environment variables:
INFO[2016-12-11T17:50:10+09:00] DOCKER_HOST=192.168.1.5:2376
INFO[2016-12-11T17:50:10+09:00]
INFO[2016-12-11T17:50:10+09:00] Connect to docker:
INFO[2016-12-11T17:50:10+09:00] docker -H 192.168.1.5:2376 --tls info
INFO[2016-12-11T17:50:10+09:00] Completed successfully

PS C:\work\vic>

 

The VCH is deployed as a vApp. By default, a vApp named "virtual-container-host" is created, containing a VCH Endpoint VM of the same name.

(screenshot: vic-10ga-11.png)


The VCH Endpoint VM boots from ISO.

(screenshot: vic-10ga-12.png)

 

2. Starting a Docker container

 

Let's start a container with the docker command. This time I'm accessing the VCH endpoint from Oracle Linux 7.

The Docker client was installed from the RPM available on the Oracle Linux Public Yum server.

[gowatana@client01 ~]$ cat /etc/oracle-release

Oracle Linux Server release 7.3

[gowatana@client01 ~]$ docker -v

Docker version 1.12.2, build a8c3fe4

 

Run the docker command against the VCH endpoint. The client and server API versions differ, which causes an error...

[gowatana@client01 ~]$ docker -H 192.168.1.5:2376 --tls info

Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)

 

Setting the DOCKER_API_VERSION environment variable lets the docker command run.

docker info displays the Docker information of the VCH side.

[gowatana@client01 ~]$ export DOCKER_API_VERSION=1.23

[gowatana@client01 ~]$ docker -H 192.168.1.5:2376 --tls info

Containers: 0

Running: 0

Paused: 0

Stopped: 0

Images: 0

Server Version: v0.8.0-7315-c8ac999

Storage Driver: vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine

VolumeStores:

vSphere Integrated Containers v0.8.0-7315-c8ac999 Backend Engine: RUNNING

VCH mhz limit: 111 Mhz

VCH memory limit: 52.32 GiB

VMware Product: VMware vCenter Server

VMware OS: linux-x64

VMware OS version: 6.0.0

Plugins:

Volume:

Network: bridge

Swarm:

NodeID:

Is Manager: false

Node Address:

Security Options:

Operating System: linux-x64

OSType: linux-x64

Architecture: x86_64

CPUs: 111

Total Memory: 52.32 GiB

Name: virtual-container-host

ID: vSphere Integrated Containers

Docker Root Dir:

Debug Mode (client): false

Debug Mode (server): false

Registry: registry-1.docker.io

[gowatana@client01 ~]$

 

Now let's start an Nginx container, using the official Nginx image from Docker Hub.

[gowatana@client01 ~]$ docker -H 192.168.1.5:2376 --tls pull nginx

Using default tag: latest

Pulling from library/nginx

386a066cd84a: Pull complete

a3ed95caeb02: Pull complete

386dc9762af9: Pull complete

d685e39ac8a4: Pull complete

Digest: sha256:e56314fa645f9e8004864d3719e55a6f47bdee2c07b9c8b7a7a1125439d23249

Status: Downloaded newer image for library/nginx:latest

[gowatana@client01 ~]$ docker -H 192.168.1.5:2376 --tls images nginx

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

nginx               latest              19d21b7e5b14        12 days ago         181.5 MB

 

Start a container named web01.

[gowatana@client01 ~]$ docker -H 192.168.1.5:2376 --tls run -d -p 8080:80 --name web01 nginx

3d4a7cca39dd2511aac38f5550ea9d584def35b7e243546770204d6a3b715a20

[gowatana@client01 ~]$ docker -H 192.168.1.5:2376 --tls ps

CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                      NAMES

3d4a7cca39dd        nginx               "nginx -g daemon off;"   About a minute ago   Up About a minute   192.168.1.5:8080->80/tcp   web01

 

A container VM has been created.

(screenshot: vic-10ga-21.png)

 

The container UUID is used as-is in the VM name. The container VM is automatically assigned an IP address from 172.16.0.0.

(screenshot: vic-10ga-22.png)

 

Starting an additional container...

[gowatana@client01 ~]$ docker -H 192.168.1.5:2376 --tls run -d -p 8081:80 --name web02 nginx

9a60d8755ebb952ce4d4272fadb33125de09f8536187e41142f7fbac53555444

[gowatana@client01 ~]$ docker -H 192.168.1.5:2376 --tls ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                      NAMES

9a60d8755ebb        nginx               "nginx -g daemon off;"   11 minutes ago      Up 10 minutes       192.168.1.5:8081->80/tcp   web02

3d4a7cca39dd        nginx               "nginx -g daemon off;"   About an hour ago   Up About an hour    192.168.1.5:8080->80/tcp   web01

[gowatana@client01 ~]$

 

...and another container VM appears.

(screenshot: vic-10ga-23.png)

 

3. Accessing the container's service

 

With VIC, you access the service a container provides through the VCH endpoint, not through the container VM itself.

The web01 container was started with the port mapping "-p 8080:80". Accessing port 8080 on the VCH endpoint address (192.168.1.5), rather than on the container VM, shows the Nginx welcome page.

(screenshot: vic-10ga-31.png)

 

VIC 1.0 had just gone GA, so I tried it out on the spur of the moment; I'd like to find opportunities to explore how to put it to real use. It is certainly true that VIC can be used with almost no configuration changes to an existing vSphere environment.

 

That's all for this first look at the VIC Engine.

 

See also:

Viewing Docker information from the vSphere Web Client Plug-In for the vSphere Integrated Containers Engine.

Can I use PCI passthrough to a USB controller if I run ESXi from USB?


Hi.

 

I installed an ESXi 6.5 server and moved my VMs from the old server (5.5) to the new one.

 

The new server runs from a USB stick, and my datastores are on an SSD drive.

In one VM I use some USB adapters, so I connect the USB adapters to the VM.

But that is not without problems.

 

First I hit this problem: https://labs.vmware.com/flings/esxi-embedded-host-client/bugs/116

Then I hit this problem (unable to dismount USB devices): https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2068645

 

But what if I buy a new PCI Express USB controller card and then use PCI passthrough to it? Will that work, or will I have the same problems?
